mmeval.metrics
Metrics
- Top-k accuracy evaluation metric.
- Calculate the average precision with respect to classes.
- MeanIoU evaluation metric.
- COCO object detection task evaluation metric.
- Proposal recall evaluation metric.
- Pascal VOC evaluation metric.
- Open Images Dataset detection evaluation metric.
- Compute F1 scores.
- HmeanIoU metric.
- EndPointError evaluation metric.
- PCK accuracy evaluation metric, widely used in pose estimation.
- PCKh accuracy evaluation metric for the MPII dataset.
- PCK accuracy evaluation metric for the JHMDB dataset.
- AVA evaluation metric.
- Calculate structural similarity (SSIM).
- Signal-to-Noise Ratio.
- Peak Signal-to-Noise Ratio.
- Mean Absolute Error metric for images.
- Mean Squared Error metric for images.
- Bilingual Evaluation Understudy (BLEU) metric.
- Sum of Absolute Differences metric for images.
- Gradient error for evaluating alpha matte prediction.
- Mean Squared Error metric for image matting.
- Connectivity error for evaluating alpha matte prediction.
- DOTA evaluation metric.
- Calculate the ROUGE score used for automatic summarization.
- Calculate the Natural Image Quality Evaluator (NIQE) metric.
- Perplexity measures how well a language model predicts a text sample.
- Calculate character-level recall and precision.
- EPE evaluation metric.
- AUC evaluation metric.
- NME evaluation metric.
- Calculate the word-level accuracy.
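As an illustration of the first entry in the index, top-k accuracy counts a sample as correct when its true label appears among the k highest-scoring classes. The following is a minimal plain-NumPy sketch of that definition, not mmeval's implementation; the function name `top_k_accuracy` and the example data are hypothetical.

```python
import numpy as np

def top_k_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k top-scoring classes.

    Hypothetical helper for illustration only; ``scores`` is (N, C),
    ``labels`` is (N,).
    """
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    # Indices of the k highest-scoring classes per sample; order within
    # the top-k slice does not matter for the membership test below.
    topk = np.argsort(scores, axis=1)[:, -k:]
    hits = (topk == labels[:, None]).any(axis=1)
    return float(hits.mean())

scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.2, 0.3],
                   [0.2, 0.3, 0.5]])
labels = np.array([1, 2, 0])
print(top_k_accuracy(scores, labels, k=1))  # 1/3: only sample 0 is a top-1 hit
print(top_k_accuracy(scores, labels, k=2))  # 2/3: sample 1's label enters the top 2
```

Increasing k can only keep the accuracy the same or raise it, since the membership test is over a superset of classes.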
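The MeanIoU entry refers to the standard segmentation metric: per-class intersection-over-union averaged across classes. A minimal sketch of that computation, again in plain NumPy rather than mmeval's own code (the function name `mean_iou` and the toy masks are assumptions):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean of per-class IoU; classes absent from both pred and gt are skipped.

    Hypothetical helper for illustration; ``pred`` and ``gt`` are integer
    label maps of the same shape.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # class appears in at least one of the two maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0],
                 [1, 1]])
gt = np.array([[0, 1],
               [1, 1]])
# class 0: inter=1, union=2 -> 0.5; class 1: inter=2, union=3 -> 2/3
print(mean_iou(pred, gt, num_classes=2))  # (0.5 + 2/3) / 2 = 7/12
```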
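Peak Signal-to-Noise Ratio, listed above among the image metrics, is defined from the mean squared error against a reference image: PSNR = 10·log10(R² / MSE), where R is the data range (255 for 8-bit images). A self-contained sketch under that definition; the function name `psnr` and the test images are illustrative, not mmeval's API:

```python
import numpy as np

def psnr(pred, gt, data_range=255.0):
    """PSNR in decibels; higher means the prediction is closer to the reference."""
    mse = np.mean((np.asarray(pred, dtype=np.float64)
                   - np.asarray(gt, dtype=np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images: PSNR is unbounded
    return float(10.0 * np.log10(data_range ** 2 / mse))

gt = np.zeros((4, 4))
pred = gt + 10.0  # constant error of 10 everywhere -> MSE = 100
print(psnr(pred, gt))  # 10 * log10(255**2 / 100) ~= 28.13 dB
```

Because PSNR depends only on the MSE, it is closely related to the Mean Squared Error entry above: the two rank predictions identically for a fixed data range.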