Precision#
Module Interface#
Precision#
- class Precision(task, pos_label=1, num_classes=None, threshold=0.5, top_k=None, num_labels=None, average=None, zero_division='warn')[source]#
Compute the precision score for different types of classification tasks.
This metric can be used for binary, multiclass, and multilabel classification tasks. It creates the appropriate metric based on the task parameter.
- Parameters:
task (Literal["binary", "multiclass", "multilabel"]) – Type of classification task.
pos_label (int, default=1) – Label to consider as positive for binary classification tasks.
num_classes (int, default=None) – Number of classes for the task. Required if task is "multiclass".
threshold (float, default=0.5) – Threshold for deciding the positive class. Only used if task is "binary" or "multilabel".
top_k (int, optional) – If given, and predictions are probabilities/logits, the precision will be computed only for the top k classes. Otherwise, top_k will be set to 1. Only used if task is "multiclass" or "multilabel".
num_labels (int, default=None) – Number of labels for the task. Required if task is "multilabel".
average (Literal["micro", "macro", "weighted", None], default=None) – If None, return the precision score for each label/class. Otherwise, use one of the following options to compute the average precision score:
micro: Calculate metrics globally by counting the total true positives and false positives.
macro: Calculate metrics for each class/label, and find their unweighted mean. This does not take label/class imbalance into account.
weighted: Calculate metrics for each label/class, and find their average weighted by support (the number of true instances for each label/class). This alters macro to account for label/class imbalance.
zero_division (Literal["warn", 0, 1], default="warn") – Value to return when there is a zero division. If set to "warn", this acts as 0, but warnings are also raised.
Examples
(binary)
>>> from cyclops.evaluate.metrics import Precision
>>> target = [0, 1, 0, 1]
>>> preds = [0, 1, 1, 1]
>>> metric = Precision(task="binary")
>>> metric(target, preds)
0.6666666666666666
>>> metric.reset_state()
>>> target = [[0, 1, 0, 1], [0, 0, 1, 1]]
>>> preds = [[0.1, 0.9, 0.8, 0.2], [0.2, 0.3, 0.6, 0.1]]
>>> for t, p in zip(target, preds):
...     metric.update_state(t, p)
>>> metric.compute()
0.6666666666666666
(multiclass)
>>> from cyclops.evaluate.metrics import Precision
>>> target = [0, 1, 2, 0]
>>> preds = [0, 2, 1, 0]
>>> metric = Precision(task="multiclass", num_classes=3)
>>> metric(target, preds)
array([1., 0., 0.])
>>> metric.reset_state()
>>> target = [[0, 1, 2, 0], [2, 1, 2, 0]]
>>> preds = [
...     [[0.1, 0.6, 0.3], [0.05, 0.1, 0.85], [0.2, 0.7, 0.1], [0.9, 0.05, 0.05]],
...     [[0.1, 0.6, 0.3], [0.05, 0.1, 0.85], [0.2, 0.7, 0.1], [0.9, 0.05, 0.05]],
... ]
>>> for t, p in zip(target, preds):
...     metric.update_state(t, p)
>>> metric.compute()
array([1., 0., 0.])
(multilabel)
>>> from cyclops.evaluate.metrics import Precision
>>> target = [[0, 1], [1, 1]]
>>> preds = [[0.1, 0.9], [0.2, 0.8]]
>>> metric = Precision(task="multilabel", num_labels=2)
>>> metric.update_state(target, preds)
>>> metric.compute()
array([0., 1.])
>>> metric.reset_state()
>>> target = [[[0, 1], [1, 1]], [[1, 1], [1, 0]]]
>>> preds = [[[0.1, 0.7], [0.2, 0.8]], [[0.5, 0.9], [0.3, 0.4]]]
>>> for t, p in zip(target, preds):
...     metric.update_state(t, p)
>>> metric.compute()
array([1., 1.])
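As a supplementary illustration (not part of the upstream docstring; the expected values below are hand-computed, so exact numeric formatting may differ), the average parameter aggregates the per-class scores from the multiclass example above:
>>> from cyclops.evaluate.metrics import Precision
>>> target = [0, 1, 2, 0]
>>> preds = [0, 2, 1, 0]
>>> micro = Precision(task="multiclass", num_classes=3, average="micro")
>>> micro(target, preds)  # 2 of the 4 predictions are correct: 2 / 4
0.5
>>> macro = Precision(task="multiclass", num_classes=3, average="macro")
>>> macro(target, preds)  # unweighted mean of the per-class scores [1., 0., 0.]
0.3333333333333333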
BinaryPrecision#
- class BinaryPrecision(pos_label=1, threshold=0.5, zero_division='warn')[source]#
Compute the precision score for binary classification tasks.
- Parameters:
pos_label (int, default=1) – The label of the positive class.
threshold (float, default=0.5) – Threshold for deciding the positive class.
zero_division (Literal["warn", 0, 1], default="warn") – Value to return when there is a zero division. If set to "warn", this acts as 0, but warnings are also raised.
Examples
>>> from cyclops.evaluate.metrics import BinaryPrecision
>>> target = [0, 1, 0, 1]
>>> preds = [0, 1, 1, 1]
>>> metric = BinaryPrecision()
>>> metric(target, preds)
0.6666666666666666
>>> metric.reset_state()
>>> target = [[0, 1, 0, 1], [0, 0, 1, 1]]
>>> preds = [[0.1, 0.9, 0.8, 0.2], [0.2, 0.3, 0.6, 0.1]]
>>> for t, p in zip(target, preds):
...     metric.update_state(t, p)
>>> metric.compute()
0.6666666666666666
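A supplementary sketch (not from the upstream docstring; expected values are hand-computed) showing how threshold changes which probability predictions count as positive:
>>> from cyclops.evaluate.metrics import BinaryPrecision
>>> target = [0, 1, 0, 1]
>>> preds = [0.1, 0.9, 0.8, 0.2]
>>> BinaryPrecision()(target, preds)  # preds >= 0.5 marks indices 1 and 2 positive: 1 TP, 1 FP
0.5
>>> BinaryPrecision(threshold=0.85)(target, preds)  # only index 1 passes the threshold: 1 TP, 0 FP
1.0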
MulticlassPrecision#
- class MulticlassPrecision(num_classes, top_k=None, average=None, zero_division='warn')[source]#
Compute the precision score for multiclass classification tasks.
- Parameters:
num_classes (int) – Number of classes in the dataset.
top_k (int, optional) – If given, and predictions are probabilities/logits, the precision will be computed only for the top k classes. Otherwise, top_k will be set to 1.
average (Literal["micro", "macro", "weighted", None], default=None) – If None, return the score for each class. Otherwise, use one of the following options to compute the average score:
micro: Calculate metric globally from the total count of true positives and false positives.
macro: Calculate metric for each class, and find their unweighted mean. This does not take class imbalance into account.
weighted: Calculate metric for each class, and find their average weighted by the support (the number of true instances for each class). This alters "macro" to account for class imbalance.
zero_division (Literal["warn", 0, 1], default="warn") – Value to return when there is a zero division. If set to "warn", this acts as 0, but warnings are also raised.
Examples
>>> from cyclops.evaluate.metrics import MulticlassPrecision
>>> target = [0, 1, 2, 0]
>>> preds = [0, 2, 1, 0]
>>> metric = MulticlassPrecision(num_classes=3, average=None)
>>> metric(target, preds)
array([1., 0., 0.])
>>> metric.reset_state()
>>> target = [[0, 1, 2, 0], [2, 1, 2, 0]]
>>> preds = [
...     [[0.1, 0.6, 0.3], [0.05, 0.1, 0.85], [0.2, 0.7, 0.1], [0.9, 0.05, 0.05]],
...     [[0.1, 0.6, 0.3], [0.05, 0.1, 0.85], [0.2, 0.7, 0.1], [0.9, 0.05, 0.05]],
... ]
>>> for t, p in zip(target, preds):
...     metric.update_state(t, p)
>>> metric.compute()
array([1., 0., 0.])
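A supplementary sketch (not from the upstream docstring; the expected value is hand-computed) of weighted averaging on the same data:
>>> from cyclops.evaluate.metrics import MulticlassPrecision
>>> target = [0, 1, 2, 0]
>>> preds = [0, 2, 1, 0]
>>> metric = MulticlassPrecision(num_classes=3, average="weighted")
>>> metric(target, preds)  # per-class scores [1., 0., 0.] weighted by supports [2, 1, 1]
0.5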
MultilabelPrecision#
- class MultilabelPrecision(num_labels, threshold=0.5, top_k=None, average=None, zero_division='warn')[source]#
Compute the precision score for multilabel classification tasks.
- Parameters:
num_labels (int) – Number of labels for the task.
threshold (float, default=0.5) – Threshold for deciding the positive class.
top_k (int, optional) – If given, and predictions are probabilities/logits, the precision will be computed only for the top k classes. Otherwise, top_k will be set to 1.
average (Literal["micro", "macro", "weighted", None], default=None) – If None, return the precision score for each label. Otherwise, use one of the following options to compute the average precision score:
micro: Calculate metric globally from the total count of true positives and false positives.
macro: Calculate metric for each label, and find their unweighted mean. This does not take label imbalance into account.
weighted: Calculate metric for each label, and find their average weighted by the support (the number of true instances for each label). This alters "macro" to account for label imbalance.
zero_division (Literal["warn", 0, 1], default="warn") – Value to return when there is a zero division. If set to "warn", this acts as 0, but warnings are also raised.
Examples
>>> from cyclops.evaluate.metrics import MultilabelPrecision
>>> target = [[0, 1], [1, 1]]
>>> preds = [[0.1, 0.9], [0.2, 0.8]]
>>> metric = MultilabelPrecision(num_labels=2, average=None)
>>> metric(target, preds)
array([0., 1.])
>>> metric.reset_state()
>>> target = [[[0, 1], [1, 1]], [[1, 1], [1, 0]]]
>>> preds = [[[0.1, 0.7], [0.2, 0.8]], [[0.5, 0.9], [0.3, 0.4]]]
>>> for t, p in zip(target, preds):
...     metric.update_state(t, p)
>>> metric.compute()
array([1., 1.])
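In the first example above, label 0 has no predicted positives, which is a zero-division case that defaults to 0. A supplementary sketch (not from the upstream docstring; expected output is hand-computed) of overriding that behaviour with zero_division:
>>> from cyclops.evaluate.metrics import MultilabelPrecision
>>> target = [[0, 1], [1, 1]]
>>> preds = [[0.1, 0.9], [0.2, 0.8]]
>>> metric = MultilabelPrecision(num_labels=2, zero_division=1)
>>> metric(target, preds)  # label 0 gets the zero_division value instead of 0
array([1., 1.])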
Functional Interface#
precision#
- precision(target, preds, task, pos_label=1, num_classes=None, threshold=0.5, top_k=None, num_labels=None, average=None, zero_division='warn')[source]#
Compute precision score for different classification tasks.
Precision is the ratio of correctly predicted positive observations to the total predicted positive observations.
- Parameters:
target (npt.ArrayLike) – Ground truth (correct) target values.
preds (npt.ArrayLike) – Predictions as returned by a classifier.
task (Literal["binary", "multiclass", "multilabel"]) – Task type.
pos_label (int) – Label of the positive class. Only used for binary classification.
num_classes (Optional[int]) – Number of classes. Only used for multiclass classification.
threshold (float) – Threshold for positive class predictions. Default is 0.5.
top_k (Optional[int]) – Number of highest probability or logits predictions to consider when computing multiclass or multilabel metrics. Default is None.
num_labels (Optional[int]) – Number of labels. Only used for multilabel classification.
average (Literal["micro", "macro", "weighted", None]) – Average to apply. If None, return scores for each class. Default is None. One of:
micro: Calculate metrics globally by counting the total true positives and false positives.
macro: Calculate metrics for each label/class, and find their unweighted mean. This does not take label imbalance into account.
weighted: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters macro to account for label imbalance.
zero_division (Literal["warn", 0, 1]) – Value to return when there is a zero division (no predicted positive samples). If set to "warn", this acts as 0, but warnings are also raised.
- Returns:
precision_score – Precision score. If average is not None or task is "binary", return a float. Otherwise, return a numpy.ndarray of precision scores for each class/label.
- Return type:
float or numpy.ndarray
- Raises:
ValueError – If task is not one of "binary", "multiclass" or "multilabel".
Examples
>>> # (binary)
>>> from cyclops.evaluate.metrics.functional import precision
>>> target = [0, 1, 1, 0]
>>> preds = [0.1, 0.9, 0.8, 0.3]
>>> precision(target, preds, task="binary")
1.0
>>> # (multiclass)
>>> from cyclops.evaluate.metrics.functional import precision
>>> target = [0, 1, 2, 0, 1, 2]
>>> preds = [
...     [0.1, 0.6, 0.3],
...     [0.05, 0.95, 0],
...     [0.1, 0.8, 0.1],
...     [0.5, 0.3, 0.2],
...     [0.2, 0.5, 0.3],
...     [0.2, 0.2, 0.6],
... ]
>>> precision(target, preds, task="multiclass", num_classes=3, average="macro")
0.8333333333333334
>>> # (multilabel)
>>> from cyclops.evaluate.metrics.functional import precision
>>> target = [[0, 1], [1, 1]]
>>> preds = [[0.1, 0.9], [0.2, 0.8]]
>>> precision(target, preds, task="multilabel", num_labels=2, average="macro")
0.5
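With average=None (the default), per-class scores are returned instead of a single value. A supplementary sketch using the multiclass data above (not from the upstream docstring; the expected array is hand-computed and its exact printed formatting may differ):
>>> from cyclops.evaluate.metrics.functional import precision
>>> target = [0, 1, 2, 0, 1, 2]
>>> preds = [
...     [0.1, 0.6, 0.3],
...     [0.05, 0.95, 0],
...     [0.1, 0.8, 0.1],
...     [0.5, 0.3, 0.2],
...     [0.2, 0.5, 0.3],
...     [0.2, 0.2, 0.6],
... ]
>>> precision(target, preds, task="multiclass", num_classes=3)
array([1. , 0.5, 1. ])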
binary_precision#
- binary_precision(target, preds, pos_label=1, threshold=0.5, zero_division='warn')[source]#
Compute precision score for binary classification.
- Parameters:
target (npt.ArrayLike) – Ground truth (correct) target values.
preds (npt.ArrayLike) – Predictions as returned by a classifier.
pos_label (int, default=1) – The label of the positive class.
threshold (float, default=0.5) – Threshold for deciding the positive class.
zero_division (Literal["warn", 0, 1], default="warn") – Value to return when there is a zero division. If set to "warn", this acts as 0, but warnings are also raised.
- Returns:
Precision score.
- Return type:
float
Examples
>>> from cyclops.evaluate.metrics.functional import binary_precision
>>> target = [0, 1, 0, 1]
>>> preds = [0, 1, 1, 1]
>>> binary_precision(target, preds)
0.6666666666666666
>>> target = [0, 1, 0, 1, 0, 1]
>>> preds = [0.11, 0.22, 0.84, 0.73, 0.33, 0.92]
>>> binary_precision(target, preds)
0.6666666666666666
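A supplementary sketch (not from the upstream docstring; the expected value is hand-computed) showing the effect of a stricter threshold on the same probability predictions:
>>> from cyclops.evaluate.metrics.functional import binary_precision
>>> target = [0, 1, 0, 1, 0, 1]
>>> preds = [0.11, 0.22, 0.84, 0.73, 0.33, 0.92]
>>> binary_precision(target, preds, threshold=0.9)  # only index 5 (0.92) is predicted positive, and it is correct
1.0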
multiclass_precision#
- multiclass_precision(target, preds, num_classes, top_k=None, average=None, zero_division='warn')[source]#
Compute precision score for multiclass classification tasks.
- Parameters:
target (npt.ArrayLike) – Ground truth (correct) target values.
preds (npt.ArrayLike) – Predictions as returned by a classifier.
num_classes (int) – Number of classes in the dataset.
top_k (int, optional) – If given, and predictions are probabilities/logits, the precision will be computed only for the top k classes. Otherwise, top_k will be set to 1.
average (Literal["micro", "macro", "weighted", None], default=None) – If None, return the precision score for each class. Otherwise, use one of the following options to compute the average precision score:
micro: Calculate metric globally from the total count of true positives and false positives.
macro: Calculate metric for each class, and find their unweighted mean. This does not take label imbalance into account.
weighted: Calculate metric for each class, and find their average weighted by the support (the number of true instances for each class). This alters "macro" to account for class imbalance.
zero_division (Literal["warn", 0, 1], default="warn") – Value to return when there is a zero division. If set to "warn", this acts as 0, but warnings are also raised.
- Returns:
precision – Precision score. If average is None, return a numpy.ndarray of precision scores for each class.
- Return type:
float or numpy.ndarray
- Raises:
ValueError – If average is not one of "micro", "macro", "weighted" or None.
Examples
>>> from cyclops.evaluate.metrics.functional import multiclass_precision
>>> target = [0, 1, 2, 0]
>>> preds = [0, 2, 1, 0]
>>> multiclass_precision(target, preds, num_classes=3)
array([1., 0., 0.])
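A supplementary sketch (not from the upstream docstring; the expected value is hand-computed) of micro averaging on the same data:
>>> from cyclops.evaluate.metrics.functional import multiclass_precision
>>> target = [0, 1, 2, 0]
>>> preds = [0, 2, 1, 0]
>>> multiclass_precision(target, preds, num_classes=3, average="micro")  # 2 correct out of 4 predictions
0.5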
multilabel_precision#
- multilabel_precision(target, preds, num_labels, threshold=0.5, top_k=None, average=None, zero_division='warn')[source]#
Compute precision score for multilabel classification tasks.
The inputs are expected to be array-likes of shape (N, L), where N is the number of samples and L is the number of labels. Target values are expected to be binary, where 1 indicates the presence of a label and 0 indicates its absence.
- Parameters:
target (npt.ArrayLike) – Ground truth (correct) target values.
preds (npt.ArrayLike) – Predictions as returned by a classifier.
num_labels (int) – Number of labels for the task.
threshold (float, default=0.5) – Threshold for deciding the positive class.
top_k (int, optional) – If given, and predictions are probabilities/logits, the precision will be computed only for the top k classes. Otherwise, top_k will be set to 1.
average (Literal["micro", "macro", "weighted", None], default=None) – If None, return the precision score for each label. Otherwise, use one of the following options to compute the average precision score:
micro: Calculate metric globally from the total count of true positives and false positives.
macro: Calculate metric for each label, and find their unweighted mean. This does not take label imbalance into account.
weighted: Calculate metric for each label, and find their average weighted by the support (the number of true instances for each label). This alters "macro" to account for label imbalance.
zero_division (Literal["warn", 0, 1], default="warn") – Value to return when there is a zero division. If set to "warn", this acts as 0, but warnings are also raised.
- Returns:
precision – Precision score. If average is None, return a numpy.ndarray of precision scores for each label.
- Return type:
float or numpy.ndarray
- Raises:
ValueError – If average is not one of "micro", "macro", "weighted", or None.
Examples
>>> from cyclops.evaluate.metrics.functional import multilabel_precision
>>> target = [[0, 1], [1, 1]]
>>> preds = [[0.1, 0.9], [0.2, 0.8]]
>>> multilabel_precision(target, preds, num_labels=2)
array([0., 1.])
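A supplementary sketch (not from the upstream docstring; the expected value is hand-computed) of micro averaging on the same data:
>>> from cyclops.evaluate.metrics.functional import multilabel_precision
>>> target = [[0, 1], [1, 1]]
>>> preds = [[0.1, 0.9], [0.2, 0.8]]
>>> multilabel_precision(target, preds, num_labels=2, average="micro")  # 2 true positives and 0 false positives across both labels
1.0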