HmeanIoU

class mmeval.metrics.HmeanIoU(match_iou_thr: float = 0.5, ignore_precision_thr: float = 0.5, pred_score_thrs: Dict = {'start': 0.3, 'step': 0.1, 'stop': 0.9}, strategy: str = 'vanilla', **kwargs)

HmeanIoU metric.

This metric computes the hmean-iou of polygons, and accepts parameters organized as follows:

  • batch_pred_polygons (Sequence[Sequence[np.ndarray]]): A batch of prediction polygons, where each element is a sequence of polygons. Each polygon is represented in the form of [x1, y1, x2, y2, …].

  • batch_pred_scores (Sequence): A batch of prediction scores, where each element is a sequence of scores.

  • batch_gt_polygons (Sequence[Sequence[np.ndarray]]): A batch of ground truth polygons, where each element is a sequence of polygons. Each polygon is represented in the form of [x1, y1, x2, y2, …].

  • batch_gt_ignore_flags (Sequence): A batch of boolean flags indicating whether to ignore the corresponding ground truth polygon. Each element is a sequence of flags.

The evaluation is done in the following steps:

  • Filter out prediction polygons that meet either of the following conditions:

    • Their score is smaller than the minimum prediction score threshold.

    • The proportion of their area that intersects with ignored GT polygons is greater than ignore_precision_thr.

  • Compute an M x N IoU matrix, where the element E_mn represents the IoU between the m-th valid GT polygon and the n-th valid prediction polygon.

  • For each prediction score threshold:

    • Ignore predictions whose scores fall below the threshold. The ignored predictions are not involved in the later metric computations.

    • Based on the IoU matrix, find the matched GT-prediction pairs according to match_iou_thr.

    • Accumulate the number of matches according to the chosen strategy.

  • Calculate the H-mean of precision and recall under each prediction score threshold, as sketched below.
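
The H-mean at a given threshold is the harmonic mean of precision (matched pairs / kept predictions) and recall (matched pairs / valid GTs). Below is a minimal sketch of this final step, assuming the match count has already been accumulated as described above; the function and variable names are illustrative and not part of the mmeval API.

def hmean_at_threshold(num_matches: int, num_gts: int, num_preds: int) -> dict:
    """Illustrative H-mean computation for one prediction score threshold."""
    precision = num_matches / num_preds if num_preds else 0.0
    recall = num_matches / num_gts if num_gts else 0.0
    denom = precision + recall
    hmean = 2 * precision * recall / denom if denom else 0.0
    return {'precision': precision, 'recall': recall, 'hmean': hmean}

For example, one match out of two kept predictions and one GT yields precision 0.5, recall 1.0 and hmean ≈ 0.67, which matches the 0.5 threshold row of the Examples output below.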

Parameters
  • match_iou_thr (float) – IoU threshold for a match. Defaults to 0.5.

  • ignore_precision_thr (float) – Precision threshold used when matching a prediction against ignored GT polygons; a prediction whose intersection proportion with ignored GTs exceeds it is filtered out. Defaults to 0.5.

  • pred_score_thrs (dict) – Best prediction score threshold searching space. Defaults to dict(start=0.3, stop=0.9, step=0.1).

  • strategy (str) – Polygon matching strategy. Options are ‘max_matching’ and ‘vanilla’. ‘max_matching’ refers to the optimum strategy that maximizes the number of matched pairs, while the vanilla strategy matches a GT polygon and a prediction polygon only if neither of them has been matched before; the vanilla strategy is the one commonly used in academic works. Defaults to ‘vanilla’. A rough sketch of the two strategies is given after this parameter list.

  • **kwargs – Keyword arguments passed to BaseMetric.
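
The two matching strategies only differ in how matches are counted from the thresholded IoU matrix. The sketch below is an illustration under stated assumptions, not the library implementation: the helper name count_matches is hypothetical, and the ‘max_matching’ branch realizes maximum bipartite matching via scipy.optimize.linear_sum_assignment.

import numpy as np
from scipy.optimize import linear_sum_assignment

def count_matches(iou_matrix: np.ndarray, match_iou_thr: float = 0.5,
                  strategy: str = 'vanilla') -> int:
    """Illustrative match counting on an M x N (GT x prediction) IoU matrix."""
    matched = iou_matrix >= match_iou_thr
    if strategy == 'max_matching':
        # Maximum bipartite matching: choose the GT-prediction assignment
        # that maximizes the number of matched pairs.
        rows, cols = linear_sum_assignment(matched.astype(float), maximize=True)
        return int(matched[rows, cols].sum())
    # 'vanilla': pair a GT and a prediction only if neither of them has
    # been matched before, in the order the matches are encountered.
    gt_taken = np.zeros(matched.shape[0], dtype=bool)
    pred_taken = np.zeros(matched.shape[1], dtype=bool)
    num_matches = 0
    for i, j in zip(*np.nonzero(matched)):
        if not gt_taken[i] and not pred_taken[j]:
            gt_taken[i] = pred_taken[j] = True
            num_matches += 1
    return num_matches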

Examples

>>> from mmeval import HmeanIoU
>>> import numpy as np
>>> hmeaniou = HmeanIoU(pred_score_thrs=dict(start=0.5, stop=0.7, step=0.1))  # noqa
>>> gt_polygons = [[np.array([0, 0, 1, 0, 1, 1, 0, 1])]]
>>> pred_polygons = [[
...     np.array([0, 0, 1, 0, 1, 1, 0, 1]),
...     np.array([0, 0, 1, 0, 1, 1, 0, 1]),
... ]]
>>> pred_scores = [np.array([1, 0.5])]
>>> gt_ignore_flags = [[False]]
>>> hmeaniou(pred_polygons, pred_scores, gt_polygons, gt_ignore_flags)
{
    0.5: {'precision': 0.5, 'recall': 1.0, 'hmean': 0.6666666666666666},  # noqa
    0.6: {'precision': 1.0, 'recall': 1.0, 'hmean': 1.0},
    'best': {'precision': 1.0, 'recall': 1.0, 'hmean': 1.0}
}
add(batch_pred_polygons: Sequence[Sequence[numpy.ndarray]], batch_pred_scores: Sequence, batch_gt_polygons: Sequence[Sequence[numpy.ndarray]], batch_gt_ignore_flags: Sequence) → None

Process one batch of data and predictions.

Parameters
  • batch_pred_polygons (Sequence[Sequence[np.ndarray]]) – A batch of prediction polygons, where each element is a sequence of polygons. Each polygon is represented in the form of [x1, y1, x2, y2, …].

  • batch_pred_scores (Sequence) – A batch of prediction scores, where each element is a sequence of scores.

  • batch_gt_polygons (Sequence[Sequence[np.ndarray]]) – A batch of ground truth polygons, where each element is a sequence of polygons. Each polygon is represented in the form of [x1, y1, x2, y2, …].

  • batch_gt_ignore_flags (Sequence) – A batch of boolean flags indicating whether to ignore the corresponding ground truth polygon. Each element is a sequence of flags.
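
A typical pattern is to call add once per batch and aggregate at the end. The sketch below assumes the usual mmeval BaseMetric interface, where compute() aggregates everything added so far; dataloader is a placeholder for your own batch iterator.

>>> hmeaniou = HmeanIoU()
>>> for pred_polygons, pred_scores, gt_polygons, gt_ignore_flags in dataloader:
...     hmeaniou.add(pred_polygons, pred_scores, gt_polygons, gt_ignore_flags)
>>> results = hmeaniou.compute()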

compute_metric(results: Sequence[Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray]]) → Dict

Compute the metrics from processed results.

Parameters

results (list[tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray]]) – The processed results of each batch.

Returns

Nested dicts as results. The inner dict contains the “precision”, “recall”, and “hmean” scores under different prediction score thresholds, and can be indexed by the corresponding threshold value from the outer dict. The outer dict also contains a “best” key, which holds the scores at the prediction score threshold that achieves the best hmean.

Return type

dict[float or “best”, dict[str, float]]
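
For instance, given a return value like the dict shown in the Examples above, individual scores can be read either at a specific searched threshold or from the “best” entry (a minimal illustration):

>>> results = hmeaniou(pred_polygons, pred_scores, gt_polygons, gt_ignore_flags)
>>> results[0.5]['precision']   # scores at a specific searched threshold
>>> results['best']['hmean']    # scores at the threshold with the best hmean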
