F1Score

class mmeval.metrics.F1Score(num_classes: int, mode: Union[str, Sequence[str]] = 'micro', cared_classes: Sequence[int] = [], ignored_classes: Sequence[int] = [], **kwargs)[source]

Compute F1 scores.

Parameters
  • num_classes (int) – Number of labels.

  • mode (str or list[str]) –

    There are 2 options:

    • ‘micro’: Calculate metrics globally by counting the total true positives, false negatives and false positives.

    • ‘macro’: Calculate metrics for each label, and find their unweighted mean.

    If mode is a list, each metric in it is calculated separately. Defaults to ‘micro’. (A short sketch contrasting the two modes follows this parameter list.)

  • cared_classes (list[int]) – The indices of the labels that participate in the metric computation. If both cared_classes and ignored_classes are empty, all classes are taken into account. Defaults to []. Note: cared_classes and ignored_classes cannot be specified together.

  • ignored_classes (list[int]) – The indices of the labels that are ignored when computing the metric. If both cared_classes and ignored_classes are empty, all classes are taken into account. Defaults to []. Note: cared_classes and ignored_classes cannot be specified together.

  • **kwargs – Keyword arguments passed to BaseMetric.
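
To make the difference between ‘micro’ and ‘macro’ concrete, here is a minimal NumPy sketch of the two reductions. It is an illustration of the formulas only, not mmeval's internal implementation; on the inputs used in the Examples section (preds = [0, 1, 2], labels = [0, 1, 4], num_classes = 5) it reproduces the 0.4 and 0.6666… values shown there.

import numpy as np

def f1_micro_macro(preds, labels, num_classes):
    """Illustrative micro/macro F1 for single-label predictions."""
    preds, labels = np.asarray(preds), np.asarray(labels)
    # per-class true positives, false positives and false negatives
    tp = np.array([np.sum((preds == c) & (labels == c)) for c in range(num_classes)], dtype=float)
    fp = np.array([np.sum((preds == c) & (labels != c)) for c in range(num_classes)], dtype=float)
    fn = np.array([np.sum((preds != c) & (labels == c)) for c in range(num_classes)], dtype=float)
    # micro: pool the counts over all classes, then compute a single F1
    micro = 2 * tp.sum() / (2 * tp.sum() + fp.sum() + fn.sum())
    # macro: per-class F1 first, then the unweighted mean over classes
    denom = 2 * tp + fp + fn
    per_class = np.divide(2 * tp, denom, out=np.zeros_like(denom), where=denom > 0)
    macro = per_class.mean()
    return {'micro_f1': micro, 'macro_f1': macro}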

Warning

Only non-negative integer labels take part in the computation; all negative ground-truth labels are ignored.

Examples

>>> from mmeval import F1Score
>>> f1 = F1Score(num_classes=5, mode=['macro', 'micro'])

Use NumPy implementation:

>>> import numpy as np
>>> labels = np.asarray([0, 1, 4])
>>> preds = np.asarray([0, 1, 2])
>>> f1(preds, labels)
{'macro_f1': 0.4,
 'micro_f1': 0.6666666666666666}

Use PyTorch implementation:

>>> import torch
>>> labels = torch.tensor([0, 1, 4])
>>> preds = torch.tensor([0, 1, 2])
>>> f1(preds, labels)
{'macro_f1': 0.4,
 'micro_f1': 0.6666666666666666}
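
As an independent cross-check of the numbers above (this assumes a recent scikit-learn is installed; it is not an mmeval dependency), sklearn.metrics.f1_score yields the same values:

>>> from sklearn.metrics import f1_score
>>> f1_score([0, 1, 4], [0, 1, 2], labels=list(range(5)),
...          average='macro', zero_division=0)
0.4
>>> f1_score([0, 1, 4], [0, 1, 2], labels=list(range(5)),
...          average='micro', zero_division=0)
0.6666666666666666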

Accumulate batch:

>>> for i in range(10):
...     labels = torch.randint(0, 4, size=(20, ))
...     predicts = torch.randint(0, 4, size=(20, ))
...     f1.add(predicts, labels)
>>> f1.compute()

add(predictions: Sequence[Union[Sequence[int], numpy.ndarray]], labels: Sequence[Union[Sequence[int], numpy.ndarray]]) → None[source]

Process one batch of data and predictions.

Calculates the following two items from the inputs and stores them in self._results:

  • prediction: prediction labels.

  • label: ground truth labels.

Parameters
  • predictions (Sequence[Sequence[int] or np.ndarray]) – A batch of sequences of non-negative integer labels.

  • labels (Sequence[Sequence[int] or np.ndarray]) – A batch of sequences of non-negative integer labels.
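
As a usage sketch (not taken from the mmeval docs; the output is just arithmetic under the micro definition, 3 pooled true positives out of 4 samples), a streaming loop can feed add one 1-D array per batch and call compute() at the end:

>>> import numpy as np
>>> from mmeval import F1Score
>>> f1 = F1Score(num_classes=5, mode='micro')
>>> f1.add(np.asarray([0, 1]), np.asarray([0, 1]))  # batch 1: 2 correct
>>> f1.add(np.asarray([2, 4]), np.asarray([4, 4]))  # batch 2: 1 correct
>>> f1.compute()
{'micro_f1': 0.75}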

compute_metric(results: Sequence[Tuple[numpy.ndarray, numpy.ndarray]]) → Dict[source]

Compute the metrics from processed results.

Parameters

results (list[(ndarray, ndarray)]) – The processed results of each batch.

Returns

The F1 scores. The keys are the metric names and the values are the corresponding results. Possible keys are ‘micro_f1’ and ‘macro_f1’.

Return type

dict[str, float]
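
For intuition, the micro branch of this reduction behaves like the sketch below (an approximation assuming 1-D integer arrays and no cared/ignored classes; the real implementation is mmeval's):

import numpy as np

def micro_f1_from_results(results):
    """Pool every batch's (prediction, label) pair into one global F1."""
    preds = np.concatenate([p for p, _ in results])
    labels = np.concatenate([l for _, l in results])
    keep = labels >= 0                 # negative ground-truth labels are ignored
    preds, labels = preds[keep], labels[keep]
    tp = np.sum(preds == labels)
    fp = fn = len(preds) - tp          # one prediction per label position
    return {'micro_f1': 2 * tp / (2 * tp + fp + fn)}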
