How is your model doing?
A quick glance of your most important metrics.
Last 0 Evaluations
- Accuracy: 0.8 ▲ (minimum threshold: 0.7). The proportion of all instances that are correctly predicted.
- Precision: 0.36 ▼ (minimum threshold: 0.7). The proportion of predicted positive instances that are correctly predicted.
- Recall: 0.2 ▼ (minimum threshold: 0.7). The proportion of actual positive instances that are correctly predicted. Also known as sensitivity or true positive rate.
- F1 Score: 0.43 ▼ (minimum threshold: 0.7). The harmonic mean of precision and recall.
- AUROC: 0.71 ▲ (minimum threshold: 0.7). The area under the receiver operating characteristic curve, a measure of the performance of a binary classification model.
- Average Precision: 0.52 ▼ (minimum threshold: 0.7). The area under the precision-recall curve (AUPRC), a measure of the performance of a binary classification model.
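The overall values above can be reproduced from a set of labels and predicted probabilities with standard metric functions. The sketch below uses scikit-learn; the variable names (y_true, y_prob) and the 0.5 decision cutoff are illustrative assumptions and are not part of this report.

```python
# Minimal sketch (assumed variable names and 0.5 decision cutoff) of how the
# overall metrics reported above can be computed with scikit-learn.
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    precision_score,
    recall_score,
    f1_score,
    roc_auc_score,
    average_precision_score,
)

MIN_THRESHOLD = 0.7  # minimum acceptable value used throughout this report


def overall_metrics(y_true: np.ndarray, y_prob: np.ndarray) -> dict:
    """Compute the report's overall metrics and flag those below the minimum threshold."""
    y_pred = (y_prob >= 0.5).astype(int)  # assumed decision cutoff
    values = {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1_score": f1_score(y_true, y_pred),
        "auroc": roc_auc_score(y_true, y_prob),
        "average_precision": average_precision_score(y_true, y_prob),
    }
    return {name: (value, value >= MIN_THRESHOLD) for name, value in values.items()}
```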
How is your model doing over time?
See how your model performs across several metrics and subgroups over time.
Multi-plot Selection: Metrics, subgroups (age, gender), Datasets
[Graphics: performance over time by metric and subgroup]
Quantitative Analysis
[Graphics: evaluation metric plots (values as summarized above)]
Fairness Analysis
[Graphics: fairness analysis plots]
Model Details
Description
The model was trained on the MIMIC-IV dataset to predict risk of in-hospital mortality.
Version
- Version: 0.0.1 (Initial Release)
  Date: 2024-07-16
Owners
- Name: CyclOps Team
  Contact: vectorinstitute.github.io/cyclops/
  Email: cyclops@vectorinstitute.ai
Licenses
- Identifier: Apache-2.0
Name
Mortality Prediction Model
Model Parameters
- learning_rate: 0.1
- gamma: 1
- colsample_bytree: 0.7
- reg_lambda: 0
- missing: nan
- max_depth: 5
- seed: 123
- enable_categorical: False
- eval_metric: logloss
- objective: binary:logistic
- n_estimators: 500
- random_state: 123
- min_child_weight: 3
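These are standard XGBoost hyperparameters. A minimal sketch of how a classifier with this configuration could be instantiated is shown below; the training call and the X_train / y_train variables are placeholders, and this is not the released training script.

```python
# Minimal sketch (assumed, not the released training code) of an XGBoost
# classifier configured with the parameters listed above.
import numpy as np
from xgboost import XGBClassifier

model = XGBClassifier(
    learning_rate=0.1,
    gamma=1,
    colsample_bytree=0.7,
    reg_lambda=0,
    missing=np.nan,
    max_depth=5,
    enable_categorical=False,
    eval_metric="logloss",
    objective="binary:logistic",
    n_estimators=500,
    random_state=123,  # the listed seed (123) mirrors this value
    min_child_weight=3,
)

# Placeholders for the MIMIC-IV features and in-hospital mortality labels;
# the actual training data are not provided in this report.
# model.fit(X_train, y_train)
```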
Considerations
Users
- Hospitals
- Clinicians
- ML Engineers
Use Cases
- Predicting prolonged length of stay (Kind: primary)
Fairness Assessment
- Affected Group: sex, age
  Benefits: Improved health outcomes for patients.
  Harms: Biased predictions for patients in certain groups (e.g. older patients) may lead to worse health outcomes.
  Mitigation Strategy: We will monitor the performance of the model on these groups and retrain the model if the performance drops below a certain threshold.
Ethical Considerations
- Risk: The model may be used to make decisions that affect the health of patients.
  Mitigation Strategy: The model should be continuously monitored for performance and retrained if the performance drops below a certain threshold, as sketched below.
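As a hedged illustration of the monitoring policy described above, the sketch below flags affected groups whose performance falls below the report's 0.7 minimum threshold. The function name, the data layout, and the choice of AUROC as the monitored metric are assumptions for illustration only.

```python
# Illustrative sketch (assumed structure) of the subgroup-monitoring policy:
# flag retraining when any affected group's performance drops below threshold.
from sklearn.metrics import roc_auc_score

MIN_THRESHOLD = 0.7  # minimum acceptable performance used in this report


def groups_needing_retraining(groups: dict[str, tuple[list[int], list[float]]]) -> list[str]:
    """Return affected groups (e.g. sex or age bands) whose AUROC is below threshold.

    `groups` maps a subgroup name to its (labels, predicted probabilities).
    """
    return [
        name
        for name, (y_true, y_prob) in groups.items()
        if roc_auc_score(y_true, y_prob) < MIN_THRESHOLD
    ]
```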