MeanIoU

class mmeval.metrics.MeanIoU(num_classes: Optional[int] = None, ignore_index: int = 255, nan_to_num: Optional[int] = None, beta: int = 1, classwise_results: bool = False, **kwargs)[source]

MeanIoU evaluation metric.

MeanIoU is a widely used evaluation metric for image semantic segmentation.

In addition to mean IoU, it also computes and returns the overall accuracy, mean accuracy, mean Dice coefficient, mean precision, mean recall, mean F-score, and Cohen's kappa coefficient.

This metric supports six kinds of inputs: numpy.ndarray, torch.Tensor, oneflow.Tensor, tensorflow.Tensor, paddle.Tensor, and jax.Array. The implementation used for the calculation depends on the input type.

Parameters
  • num_classes (int, optional) – The number of classes. If None, it will be obtained from the ‘num_classes’ or ‘classes’ field in self.dataset_meta. Defaults to None.

  • ignore_index (int, optional) – Index that will be ignored in evaluation. Defaults to 255.

  • nan_to_num (int, optional) – If specified, NaN values in the results will be replaced by this number. Defaults to None.

  • beta (int, optional) – Determines the weight of recall in the F-score. Defaults to 1.

  • classwise_results (bool, optional) – Whether to return the computed results of each class. Defaults to False.

  • **kwargs – Keyword arguments passed to BaseMetric.

Examples

>>> from mmeval import MeanIoU
>>> miou = MeanIoU(num_classes=4)

Use NumPy implementation:

>>> import numpy as np
>>> labels = np.asarray([[[0, 1, 1], [2, 3, 2]]])
>>> preds = np.asarray([[[0, 2, 1], [1, 3, 2]]])
>>> miou(preds, labels)
{'aAcc': 0.6666666666666666,
 'mIoU': 0.6666666666666666,
 'mAcc': 0.75,
 'mDice': 0.75,
 'mPrecision': 0.75,
 'mRecall': 0.75,
 'mFscore': 0.75,
 'kappa': 0.5384615384615384}

Use PyTorch implementation:

>>> import torch
>>> labels = torch.Tensor([[[0, 1, 1], [2, 3, 2]]])
>>> preds = torch.Tensor([[[0, 2, 1], [1, 3, 2]]])
>>> miou(preds, labels)
{'aAcc': 0.6666666666666666,
 'mIoU': 0.6666666666666666,
 'mAcc': 0.75,
 'mDice': 0.75,
 'mPrecision': 0.75,
 'mRecall': 0.75,
 'mFscore': 0.75,
 'kappa': 0.5384615384615384}

Accumulate batches:

>>> for i in range(10):
...     labels = torch.randint(0, 4, size=(100, 10, 10))
...     predicts = torch.randint(0, 4, size=(100, 10, 10))
...     miou.add(predicts, labels)
>>> miou.compute()  

add(predictions: Sequence, labels: Sequence) → None[source]

Process one batch of data and predictions.

Calculate the following three quantities from the inputs and store them in self._results (their relationship to the confusion matrix is sketched below):

  • num_tp_per_class: the number of true positives per class.

  • num_gts_per_class: the number of ground-truth pixels per class.

  • num_preds_per_class: the number of predicted pixels per class.

Parameters
  • predictions (Sequence) – A sequence of predicted segmentation masks.

  • labels (Sequence) – A sequence of ground-truth segmentation masks.
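
These three per-class counts can be read directly off a confusion matrix. A minimal NumPy sketch of that relationship (an illustration, not the library's internal code):

>>> import numpy as np
>>> cm = np.array([[2, 1], [0, 3]])  # rows: ground truth, cols: prediction
>>> num_tp_per_class = np.diag(cm)        # correctly classified pixels per class
>>> num_gts_per_class = cm.sum(axis=1)    # ground-truth pixels per class
>>> num_preds_per_class = cm.sum(axis=0)  # predicted pixels per class
>>> num_tp_per_class, num_gts_per_class, num_preds_per_class
(array([2, 3]), array([3, 3]), array([2, 4]))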

compute_confusion_matrix(prediction: numpy.ndarray, label: numpy.ndarray, num_classes: int) → numpy.ndarray[source]

Compute confusion matrix with NumPy.

Parameters
  • prediction (numpy.ndarray) – The prediction.

  • label (numpy.ndarray) – The ground truth.

  • num_classes (int) – The number of classes.

Returns

The computed confusion matrix.

Return type

numpy.ndarray
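
A common way to compute such a confusion matrix with NumPy is numpy.bincount over flattened class-pair indices. A hedged sketch of that approach (the ignore_index handling mirrors the class parameter above; the library's actual implementation may differ):

>>> import numpy as np
>>> def confusion_matrix(prediction, label, num_classes, ignore_index=255):
...     mask = label != ignore_index  # drop pixels marked as ignored
...     prediction, label = prediction[mask], label[mask]
...     index = label.astype(np.int64) * num_classes + prediction
...     counts = np.bincount(index, minlength=num_classes ** 2)
...     return counts.reshape(num_classes, num_classes)
>>> confusion_matrix(np.array([0, 2, 1, 1, 3, 2]),
...                  np.array([0, 1, 1, 2, 3, 2]), num_classes=4)
array([[1, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 1]])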

compute_metric(results: List[Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray]]) → dict[source]

Compute the MeanIoU metric.

This method is invoked by BaseMetric.compute after distributed synchronization.

Parameters

results (List[tuple]) –

This list has already been synced across all ranks. It is a list of tuples, and each tuple has the following elements:

  • (List[numpy.ndarray]): Each element in the list is the number of true positives per class on a sample.

  • (List[numpy.ndarray]): Each element in the list is the number of ground-truth pixels per class on a sample.

  • (List[numpy.ndarray]): Each element in the list is the number of predicted pixels per class on a sample.

Returns

The computed metric, with the following keys:

  • aAcc, the overall accuracy, namely pixel accuracy.

  • mIoU, the mean Intersection over Union (IoU) for all classes.

  • mAcc, the mean accuracy for all classes, namely mean pixel accuracy.

  • mDice, the mean Dice coefficient for all classes.

  • mPrecision, the mean precision for all classes.

  • mRecall, the mean recall for all classes.

  • mFscore, the mean F-score for all classes.

  • kappa, Cohen's kappa coefficient.

  • classwise_results, the evaluation results of each class. This is returned only if self.classwise_results is True.

Return type

Dict
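
For intuition, the headline values follow from the summed per-class counts under the usual definitions. A sketch of those formulas (NaN replacement via nan_to_num, kappa, and the classwise results are omitted; this is not the library's exact code):

>>> import numpy as np
>>> def summarize(num_tp, num_gts, num_preds, beta=1):
...     iou = num_tp / (num_gts + num_preds - num_tp)
...     dice = 2 * num_tp / (num_gts + num_preds)
...     precision = num_tp / num_preds
...     recall = num_tp / num_gts  # per-class accuracy equals recall
...     fscore = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
...     return {'aAcc': num_tp.sum() / num_gts.sum(),
...             'mIoU': np.nanmean(iou), 'mAcc': np.nanmean(recall),
...             'mDice': np.nanmean(dice), 'mPrecision': np.nanmean(precision),
...             'mRecall': np.nanmean(recall), 'mFscore': np.nanmean(fscore)}
>>> m = summarize(np.array([1, 1, 1, 1]), np.array([1, 2, 2, 1]), np.array([1, 2, 2, 1]))
>>> round(float(m['mIoU']), 4), round(float(m['mAcc']), 4)
(0.6667, 0.75)

The counts passed here correspond to the 4-class doctest at the top of this page, and the values agree with its output.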

property num_classes: int

Returns the number of classes.

The number of classes should be set during initialization, otherwise it will be obtained from the ‘classes’ or ‘num_classes’ field in self.dataset_meta.

Raises

RuntimeError – If num_classes is not set and cannot be obtained from self.dataset_meta.

Returns

The number of classes.

Return type

int
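
A brief illustration of the fallback behaviour described above (the class names are placeholders and the exact error message is elided):

>>> from mmeval import MeanIoU
>>> miou = MeanIoU()
>>> miou.num_classes
Traceback (most recent call last):
    ...
RuntimeError: ...
>>> miou.dataset_meta = {'classes': ['road', 'car', 'sky', 'person']}
>>> miou.num_classes
4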
