How is your model doing?


A quick glance at your most important metrics.

Last 3 Evaluations

Accuracy
The proportion of all instances that are correctly predicted.
Current value: 0.33 (minimum threshold: 0.7)
[Time-series plot of accuracy, Jan 8–10, 2020; y-axis 0–1]
Precision
The proportion of predicted positive instances that are correctly predicted.
Current value: 0.0 (minimum threshold: 0.7)
[Time-series plot of precision, Jan 8–10, 2020; y-axis 0–1]
Recall
The proportion of actual positive instances that are correctly predicted. Also known as sensitivity or true positive rate.
Current value: 0.67 (minimum threshold: 0.7)
[Time-series plot of recall, Jan 8–10, 2020; y-axis 0–1]
F1 Score
The harmonic mean of precision and recall.
Current value: 0.0 (minimum threshold: 0.7)
[Time-series plot of F1 score, Jan 8–10, 2020; y-axis 0–1]
AUROC
The area under the receiver operating characteristic curve (AUROC) is a measure of the performance of a binary classification model.
Current value: 0.26 (minimum threshold: 0.7)
[Time-series plot of AUROC, Jan 8–10, 2020; y-axis 0–1]
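The five metrics above can be computed directly from a model's labels, predictions, and scores. The sketch below is a minimal plain-Python illustration (the example arrays are assumptions, not values from this report); a production pipeline would more likely use a library such as scikit-learn.

```python
# Minimal sketch of the five dashboard metrics for a binary classifier.
# All inputs here are illustrative.

def confusion_counts(y_true, y_pred):
    """Counts of true/false positives/negatives for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def accuracy(y_true, y_pred):
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    return (tp + tn) / len(y_true)

def precision(y_true, y_pred):
    tp, fp, _, _ = confusion_counts(y_true, y_pred)
    return tp / (tp + fp) if tp + fp else 0.0

def recall(y_true, y_pred):
    tp, _, fn, _ = confusion_counts(y_true, y_pred)
    return tp / (tp + fn) if tp + fn else 0.0

def f1_score(y_true, y_pred):
    p, r = precision(y_true, y_pred), recall(y_true, y_pred)
    return 2 * p * r / (p + r) if p + r else 0.0

def auroc(y_true, y_score):
    # Probability that a random positive is scored above a random
    # negative (ties count 0.5); equivalent to the area under the ROC curve.
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
y_score = [0.9, 0.2, 0.4, 0.8, 0.6, 0.1]
print(round(accuracy(y_true, y_pred), 2))  # → 0.67
```

Note how one threshold crossing cascades: a single missed positive lowers recall, which in turn pulls down the F1 score, so several cards can fall below the 0.7 threshold at once.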


How is your model doing over time?


See how your model is performing across several metrics and subgroups over time.

Multi-plot Selection:

Metrics

Moving Average: The moving average of all data points.
Standard Deviation: A measure of how dispersed the data points are in relation to the mean.
Precision: The proportion of predicted positive instances that are correctly predicted.
Recall: The proportion of actual positive instances that are correctly predicted. Also known as sensitivity or true positive rate.
F1 Score: The harmonic mean of precision and recall.
AUROC: The area under the receiver operating characteristic curve, a measure of the performance of a binary classification model.
Accuracy: The proportion of all instances that are correctly predicted.
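The two summary statistics in the selector, the moving average and the standard deviation, can be sketched as follows. The window size and the example series are assumptions for illustration only.

```python
# Trailing moving average and standard deviation of a metric series.
# The series below is illustrative (e.g. precision per evaluation).
import statistics

def moving_average(values, window=3):
    """Trailing moving average; windows are shorter at the series start."""
    out = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        out.append(sum(values[start:i + 1]) / (i - start + 1))
    return out

series = [0.7, 0.65, 0.6, 0.5, 0.33]
smoothed = moving_average(series)          # smooths evaluation-to-evaluation noise
spread = statistics.stdev(series)          # dispersion around the series mean
```

Smoothing with a trailing window is a common choice for dashboards because each plotted point uses only past evaluations, so the curve never changes retroactively as new data arrives.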

Age

[Time-series plot of selected metrics, Jan 8–10, 2020; y-axis 0–1]
Current Precision is trending downwards and is below the threshold.
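The alert above combines two checks: the latest value sits below the minimum threshold, and recent values are trending downwards. A hedged sketch of that logic (the trend window, threshold, and history are illustrative assumptions, not this tool's actual implementation):

```python
# Flag a metric when its latest value is below the minimum threshold
# AND its recent values are non-increasing. All inputs are illustrative.

def alert(values, threshold, trend_window=3):
    recent = values[-trend_window:]
    trending_down = all(b <= a for a, b in zip(recent, recent[1:]))
    below = values[-1] < threshold
    return trending_down and below

precision_history = [0.72, 0.68, 0.55, 0.33]
print(alert(precision_history, threshold=0.7))  # → True
```

Requiring both conditions avoids alerting on a metric that merely dips once below the threshold while otherwise recovering.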

Datasets


Quantitative Analysis


Model Details


Considerations