AVAMeanAP

class mmeval.metrics.AVAMeanAP(ann_file: str, label_file: str, exclude_file: Optional[str] = None, num_classes: int = 81, custom_classes: Optional[List[int]] = None, verbose: bool = True, **kwargs)[source]

AVA evaluation metric.

AVA (Atomic Visual Actions): https://research.google.com/ava.

This metric computes mAP using the AVA evaluation toolkit provided by the dataset authors.

Parameters
  • ann_file (str) – The annotation file path.

  • label_file (str) – The label file path.

  • exclude_file (str, optional) – The excluded timestamp file path. Defaults to None.

  • num_classes (int) – Number of classes. Defaults to 81.

  • custom_classes (list(int), optional) – A subset of class ids from the origin dataset. Defaults to None.

  • verbose (bool) – Whether to print messages in the evaluation process. Defaults to True.

  • **kwargs – Keyword parameters passed to BaseMetric.

Examples

>>> from mmeval import AVAMeanAP
>>> import numpy as np
>>>
>>> ann_file = 'tests/test_metrics/ava_detection_gt.csv'
>>> label_file = 'tests/test_metrics/ava_action_list.txt'
>>> num_classes = 4
>>> ava_metric = AVAMeanAP(ann_file=ann_file, label_file=label_file,
...                        num_classes=num_classes)
>>>
>>> predictions = [
...     {
...         'video_id': '3reY9zJKhqN',
...         'timestamp': 1774,
...         'outputs': [
...             np.array([[0.362, 0.156, 0.969, 0.666, 0.106],
...                       [0.442, 0.083, 0.721, 0.947, 0.162]]),
...             np.array([[0.288, 0.365, 0.766, 0.551, 0.706],
...                       [0.178, 0.296, 0.707, 0.995, 0.223]]),
...             np.array([[0.417, 0.167, 0.843, 0.939, 0.015],
...                       [0.35, 0.421, 0.57, 0.689, 0.427]])]
...     },
...     {
...         'video_id': 'HmR8SmNIoxu',
...         'timestamp': 1384,
...         'outputs': [
...             np.array([[0.256, 0.338, 0.726, 0.799, 0.563],
...                       [0.071, 0.256, 0.64, 0.75, 0.297]]),
...             np.array([[0.326, 0.036, 0.513, 0.991, 0.405],
...                       [0.351, 0.035, 0.729, 0.936, 0.945]]),
...             np.array([[0.051, 0.005, 0.975, 0.942, 0.424],
...                       [0.347, 0.05, 0.97, 0.944, 0.396]])]
...     },
...     {
...         'video_id': '5HNXoce1raG',
...         'timestamp': 1097,
...         'outputs': [
...             np.array([[0.39, 0.087, 0.833, 0.616, 0.447],
...                       [0.461, 0.212, 0.627, 0.527, 0.036]]),
...             np.array([[0.022, 0.394, 0.93, 0.527, 0.109],
...                       [0.208, 0.462, 0.874, 0.948, 0.954]]),
...             np.array([[0.206, 0.456, 0.564, 0.725, 0.685],
...                       [0.106, 0.445, 0.782, 0.673, 0.367]])]
...     }
... ]
>>> ava_metric(predictions)
{'mAP@0.5IOU': 0.027777778}
add(predictions: Sequence[dict]) → None[source]

Add detection results to the results list.

Parameters

predictions (Sequence[dict]) –

A list of prediction dicts, each containing the following keys:

  • video_id: The id of the video, e.g., 3reY9zJKhqN.

  • timestamp: The timestamp of the video, e.g., 1774.

  • outputs: A list of bbox results, one array per class, with each row in the format [x1, y1, x2, y2, score].
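
A minimal sketch of batched accumulation (assuming a fresh ava_metric instance and the predictions list from the class example above; results collected this way are consumed by compute_metric):

>>> ava_metric.add(predictions[:2])  # first batch of prediction dicts
>>> ava_metric.add(predictions[2:])  # remaining predictions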

ava_eval(result_file: str) → dict[source]

Perform AVA evaluation.

Parameters

result_file (str) – The dumped results file path.

Returns

The evaluation results.

Return type

dict
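
A minimal sketch of evaluating a dumped results file (assuming the ava_metric instance and predictions list from the class example; the output path is illustrative):

>>> ava_metric.results2csv(predictions, 'ava_results.csv')
>>> eval_results = ava_metric.ava_eval('ava_results.csv')

For the example predictions, eval_results should reproduce the mAP@0.5IOU value shown in the class example.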

compute_metric(results: list) → dict[source]

Compute the AVA MeanAP.

Parameters

results (list) – A list of detection results.

Returns

The computed AVA metric.

Return type

dict
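
A hedged sketch, assuming (per the add method above) that results is simply the list of accumulated prediction dicts:

>>> ava_metric.compute_metric(predictions)
{'mAP@0.5IOU': 0.027777778}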

read_csv(csv_file: str, class_whitelist: Optional[set] = None) → tuple[source]

Loads boxes and class labels from a CSV file in the AVA format.

CSV file format described at https://research.google.com/ava/download.html.

Parameters
  • csv_file (str) – A csv file path.

  • class_whitelist (set, optional) – If provided, boxes corresponding to (integer) class labels not in this set are skipped.

Returns

  • boxes (dict): A dictionary mapping each unique image key (string) to a list of boxes, given as coordinates [y1, x1, y2, x2].

  • labels (dict): A dictionary mapping each unique image key (string) to a list of integer class labels, matching the corresponding box in boxes.

  • scores (dict): A dictionary mapping each unique image key (string) to a list of score values, matching the corresponding label in labels. If scores are not provided in the csv, they default to 1.0.

Return type

tuple (boxes, labels, scores)
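
A minimal sketch (reusing the ground-truth CSV from the class example; image keys are "video_id,timestamp" strings):

>>> boxes, labels, scores = ava_metric.read_csv(
...     'tests/test_metrics/ava_detection_gt.csv')
>>> key = next(iter(boxes))
>>> boxes[key][0]  # a single [y1, x1, y2, x2] box for this image key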

read_exclusions(exclude_file: str) → set[source]

Reads a CSV file of excluded timestamps.

Parameters

exclude_file (str) – The path of the exclude file.

Returns

A set of strings containing excluded image keys, e.g. "aaaaaaaaaaa,0904", or an empty set if the exclusions file is None.

Return type

excluded (set)
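
A minimal sketch (the exclusion file path is illustrative):

>>> excluded = ava_metric.read_exclusions('ava_excluded_timestamps.csv')
>>> 'aaaaaaaaaaa,0904' in excluded  # membership test on "video_id,timestamp" keys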

read_label(label_file: str) → tuple[source]

Reads a label mapping file.

Parameters

label_file (str) – The path of the label file.

Returns

  • labelmap (list): The label map in the form used by the object_detection_evaluation module: a list of {"id": integer, "name": classname} dicts.

  • class_ids (set): A set containing all of the valid class id integers.

Return type

tuple (labelmap, class_ids)
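
A minimal sketch (reusing the label file from the class example):

>>> labelmap, class_ids = ava_metric.read_label(
...     'tests/test_metrics/ava_action_list.txt')
>>> labelmap[0]['id'] in class_ids  # each labelmap entry is {"id": int, "name": str}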

results2csv(results: List[dict], out_file: str) → None[source]

Dump the results to a csv file.

Parameters
  • results (list[dict]) – A list of detection results.

  • out_file (str) – The output csv file path.
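
A minimal sketch (using the predictions list from the class example; the output path is illustrative):

>>> ava_metric.results2csv(predictions, 'ava_results.csv')

The dumped file can then be scored with ava_eval above.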
