Accuracy

class mmeval.metrics.Accuracy(topk: Union[int, Sequence[int]] = (1, ), thrs: Optional[Union[float, Sequence[Optional[float]]]] = 0.0, **kwargs)[source]

Top-k accuracy evaluation metric.

This metric computes the accuracy based on the given topk and thresholds.

Currently, this metric supports 5 kinds of inputs, i.e. numpy.ndarray, torch.Tensor, oneflow.Tensor, tensorflow.Tensor and paddle.Tensor, and the implementation used for the calculation depends on the input type.

Parameters
  • topk (int | Sequence[int]) – If the target label is among the top-k predictions, the prediction is regarded as correct. Defaults to 1.

  • thrs (Sequence[float | None] | float | None) – Predictions with scores under the thresholds are considered negative. None means no thresholds. Defaults to 0.

  • **kwargs – Keyword parameters passed to BaseMetric.

Examples

>>> from mmeval import Accuracy
>>> accuracy = Accuracy()

Use NumPy implementation:

>>> import numpy as np
>>> labels = np.asarray([0, 1, 2, 3])
>>> preds = np.asarray([0, 2, 1, 3])
>>> accuracy(preds, labels)
{'top1': 0.5}

Use PyTorch implementation:

>>> import torch
>>> labels = torch.Tensor([0, 1, 2, 3])
>>> preds = torch.Tensor([0, 2, 1, 3])
>>> accuracy(preds, labels)
{'top1': 0.5}

Computing top-k accuracy with specified thresholds:

>>> labels = np.asarray([0, 1, 2, 3])
>>> preds = np.asarray([
...     [0.7, 0.1, 0.1, 0.1],
...     [0.1, 0.3, 0.4, 0.2],
...     [0.3, 0.4, 0.2, 0.1],
...     [0.0, 0.0, 0.1, 0.9]])
>>> accuracy = Accuracy(topk=(1, 2, 3))
>>> accuracy(preds, labels)
{'top1': 0.5, 'top2': 0.75, 'top3': 1.0}
>>> accuracy = Accuracy(topk=2, thrs=(0.1, 0.5))
>>> accuracy(preds, labels)
{'top2_thr-0.10': 0.75, 'top2_thr-0.50': 0.5}
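
With thrs=0.5, the second sample's two highest scores (0.4 and 0.3) fall under the threshold and are counted as negative, so top-2 accuracy drops from 0.75 to 0.5.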

Accumulate batches:

>>> for i in range(10):
...     labels = torch.randint(0, 4, size=(100, ))
...     predicts = torch.randint(0, 4, size=(100, ))
...     accuracy.add(predicts, labels)
>>> accuracy.compute()  # doctest: +SKIP

add(predictions: Sequence, labels: Sequence) → None[source]

Add the intermediate results to self._results.

Parameters
  • predictions (Sequence) – Predictions from the model. It can be labels (N, ), or scores of every class (N, C).

  • labels (Sequence) – The ground truth labels. It should be (N, ).
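
For instance, a minimal sketch of incremental evaluation with per-class scores of shape (N, C) (the score values below are made up for illustration):

>>> import numpy as np
>>> from mmeval import Accuracy
>>> accuracy = Accuracy(topk=1)
>>> scores = np.asarray([[0.6, 0.4], [0.3, 0.7]])  # per-class scores, shape (N, C)
>>> labels = np.asarray([0, 1])  # ground-truth labels, shape (N, )
>>> accuracy.add(scores, labels)
>>> accuracy.compute()
{'top1': 1.0}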

compute_metric(results: List[Union[Iterable, numpy.number, torch.Tensor, tensorflow.Tensor, paddle.Tensor, jax.Array, flow.Tensor]]) → Dict[str, float][source]

Compute the accuracy metric.

This method would be invoked in BaseMetric.compute after distributed synchronization.

Parameters

results (list) – A list consisting of the correct counts. This list has already been synced across all ranks.

Returns

The computed accuracy metric.

Return type

Dict[str, float]
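
In typical usage compute_metric is not called directly: BaseMetric.compute gathers self._results across ranks and then invokes it. A minimal sketch of that flow in a single process, where the synchronization is a no-op:

>>> import numpy as np
>>> from mmeval import Accuracy
>>> accuracy = Accuracy()
>>> accuracy.add(np.asarray([0, 2]), np.asarray([0, 1]))
>>> accuracy.compute()  # syncs results, then calls compute_metric
{'top1': 0.5}

For multi-process evaluation, a distributed communication backend can be selected via the constructor, e.g. Accuracy(dist_backend='torch_cuda'); the available backend names depend on the installed mmeval version.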
