CharRecallPrecision
- class mmeval.metrics.CharRecallPrecision(letter_case: str = 'unchanged', invalid_symbol: str = '[^A-Za-z0-9一-龥]', **kwargs)
Calculate the char-level recall & precision.
- Parameters
letter_case (str) –
Letter case conversion; one of three options:
unchanged: do not change prediction texts and labels.
upper: convert prediction texts and labels to uppercase.
lower: convert prediction texts and labels to lowercase.
This usually only affects English characters. Defaults to 'unchanged'.
invalid_symbol (str) – A regular expression used to filter out invalid or unwanted characters. Defaults to '[^A-Za-z0-9\u4e00-\u9fa5]'.
**kwargs – Keyword parameters passed to BaseMetric.
Examples
>>> from mmeval import CharRecallPrecision
>>> metric = CharRecallPrecision()
>>> metric(['helL', 'HEL'], ['hello', 'HELLO'])
{'char_recall': 0.6, 'char_precision': 0.8571428571428571}
>>> metric = CharRecallPrecision(letter_case='upper')
>>> metric(['helL', 'HEL'], ['hello', 'HELLO'])
{'char_recall': 0.7, 'char_precision': 1.0}
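To see how the numbers in the example arise, here is a minimal self-contained sketch of char-level matching. It assumes matched characters are counted as the multiset intersection of each prediction/ground-truth pair, which reproduces the example's results; the actual mmeval implementation may differ in details, and the helper name char_recall_precision is hypothetical:

```python
import re
from collections import Counter

def char_recall_precision(predictions, groundtruths,
                          letter_case='unchanged',
                          invalid_symbol=r'[^A-Za-z0-9\u4e00-\u9fa5]'):
    """Hypothetical helper mirroring CharRecallPrecision's counting."""
    match_num = gt_num = pred_num = 0
    for pred, gt in zip(predictions, groundtruths):
        # Apply the letter_case option to both texts.
        if letter_case == 'upper':
            pred, gt = pred.upper(), gt.upper()
        elif letter_case == 'lower':
            pred, gt = pred.lower(), gt.lower()
        # Drop characters matched by the invalid_symbol pattern.
        pred = re.sub(invalid_symbol, '', pred)
        gt = re.sub(invalid_symbol, '', gt)
        # Matched chars = multiset intersection of the two strings.
        match_num += sum((Counter(pred) & Counter(gt)).values())
        gt_num += len(gt)
        pred_num += len(pred)
    return {'char_recall': match_num / max(gt_num, 1),
            'char_precision': match_num / max(pred_num, 1)}
```

With the example inputs, 'helL' shares h, e, l with 'hello' (3 chars) and 'HEL' shares all 3 with 'HELLO', giving recall 6/10 = 0.6 and precision 6/7; converting both sides to uppercase first raises the match count to 7.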
- add(predictions: Sequence[str], groundtruths: Sequence[str]) → None
Process one batch of data and predictions.
- Parameters
predictions (list[str]) – The prediction texts.
groundtruths (list[str]) – The ground truth texts.
- compute_metric(results: Sequence[Tuple[int, int, int]]) → Dict
Compute the metrics from processed results.
- Parameters
results (list[tuple]) – The processed results of each batch.
- Returns
The computed metrics. The keys are the names of the metrics, and the values are corresponding results.
- Return type
Dict
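The add/compute_metric split can be sketched as follows. This is an illustrative class, not mmeval's implementation: it assumes each processed result is a (match_num, gt_num, pred_num) tuple, an assumption inferred from the Tuple[int, int, int] signature above, and the class name CharStats is hypothetical:

```python
from collections import Counter
from typing import Dict, List, Sequence, Tuple

class CharStats:
    """Hypothetical sketch of the add / compute_metric protocol."""

    def __init__(self) -> None:
        self._results: List[Tuple[int, int, int]] = []

    def add(self, predictions: Sequence[str],
            groundtruths: Sequence[str]) -> None:
        # Store one (match, gt_len, pred_len) tuple per sample,
        # so batches can be accumulated incrementally.
        for pred, gt in zip(predictions, groundtruths):
            matched = sum((Counter(pred) & Counter(gt)).values())
            self._results.append((matched, len(gt), len(pred)))

    def compute_metric(
            self, results: Sequence[Tuple[int, int, int]]) -> Dict:
        # Reduce all per-sample tuples into the two final scores.
        match_num = sum(r[0] for r in results)
        gt_num = sum(r[1] for r in results)
        pred_num = sum(r[2] for r in results)
        return {'char_recall': match_num / max(gt_num, 1),
                'char_precision': match_num / max(pred_num, 1)}

# Usage: feed batches one at a time, then reduce.
metric = CharStats()
metric.add(['helL'], ['hello'])   # first batch
metric.add(['HEL'], ['HELLO'])    # second batch
scores = metric.compute_metric(metric._results)
```

Accumulating lightweight count tuples per batch and reducing them once at the end is what lets metrics of this style work over arbitrarily large datasets (and be merged across distributed workers) without holding the texts themselves.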