How is your model doing?


A quick glance at your most important metrics.

Last 0 Evaluations

Accuracy: 0.8 (minimum threshold: 0.7)
The proportion of all instances that are correctly predicted.

Precision: 0.36 (minimum threshold: 0.7)
The proportion of predicted positive instances that are correctly predicted.

Recall: 0.2 (minimum threshold: 0.7)
The proportion of actual positive instances that are correctly predicted. Also known as sensitivity or true positive rate.

F1 Score: 0.43 (minimum threshold: 0.7)
The harmonic mean of precision and recall.

AUROC: 0.71 (minimum threshold: 0.7)
The area under the receiver operating characteristic curve (AUROC), a measure of the performance of a binary classification model.

Average Precision: 0.52 (minimum threshold: 0.7)
The area under the precision-recall curve (AUPRC), a measure of the performance of a binary classification model.
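The thresholded metrics above all derive from the confusion matrix. A minimal sketch in plain Python, using made-up labels and predictions for illustration (the report's values come from the actual evaluation data; AUROC and average precision additionally need predicted scores rather than hard labels):

```python
# Hypothetical labels and hard predictions, for illustration only.
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_pred = [0, 1, 1, 1, 0, 0, 0, 1]

# Confusion-matrix counts.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = (tp + tn) / (tp + tn + fp + fn)   # all correct / all instances
precision = tp / (tp + fp)                    # correct / predicted positive
recall = tp / (tp + fn)                       # correct / actual positive
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
```

In practice a library such as scikit-learn (`accuracy_score`, `precision_score`, `recall_score`, `f1_score`, `roc_auc_score`, `average_precision_score`) would compute these, including the curve-based AUROC and AUPRC.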


How is your model doing over time?


See how your model performs across several metrics and subgroups over time.

Multi-plot Selection:

Metrics:

  • Mean: the moving average of all data points.
  • Standard Deviation: a measure of how dispersed the data points are in relation to the mean.
  • Accuracy: the proportion of all instances that are correctly predicted.
  • Precision: the proportion of predicted positive instances that are correctly predicted.
  • Recall: the proportion of actual positive instances that are correctly predicted. Also known as sensitivity or true positive rate.
  • F1 Score: the harmonic mean of precision and recall.
  • AUROC: the area under the receiver operating characteristic curve, a measure of the performance of a binary classification model.
  • Average Precision: the area under the precision-recall curve (AUPRC), a measure of the performance of a binary classification model.

Subgroups:

  • age
  • gender

Datasets

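The moving average and standard deviation used for the over-time plots can be sketched in plain Python (the accuracy series below is hypothetical; the window size is illustrative):

```python
def moving_average(values, window):
    """Trailing moving average over at most `window` points."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def std_dev(values):
    """Population standard deviation: dispersion around the mean."""
    mean = sum(values) / len(values)
    return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

# Hypothetical per-evaluation accuracy values over time.
accuracy_over_time = [0.82, 0.80, 0.78, 0.75, 0.80]
smoothed = moving_average(accuracy_over_time, window=3)
spread = std_dev(accuracy_over_time)
```

A plotting library would then draw `smoothed` over the evaluation timestamps, optionally shading a band of `spread` around it.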
Graphics

Quantitative Analysis


Accuracy: 0.8 (minimum threshold: 0.7)
The proportion of all instances that are correctly predicted.

Precision: 0.36 (minimum threshold: 0.7)
The proportion of predicted positive instances that are correctly predicted.

Recall: 0.2 (minimum threshold: 0.7)
The proportion of actual positive instances that are correctly predicted. Also known as sensitivity or true positive rate.

F1 Score: 0.43 (minimum threshold: 0.7)
The harmonic mean of precision and recall.

AUROC: 0.71 (minimum threshold: 0.7)
The area under the receiver operating characteristic curve (AUROC), a measure of the performance of a binary classification model.

Average Precision: 0.52 (minimum threshold: 0.7)
The area under the precision-recall curve (AUPRC), a measure of the performance of a binary classification model.

Graphics

Fairness Analysis


Graphics

Model Details


Description

The model was trained on the MIMIC-IV dataset to predict the risk of in-hospital mortality.

Version

  • Date: 2024-07-16
    Initial Release
    Version: 0.0.1

Owners

  • Name: CyclOps Team
    Contact: vectorinstitute.github.io/cyclops/
    Email: cyclops@vectorinstitute.ai

Licenses

  • Identifier: Apache-2.0

Name

Mortality Prediction Model

Model Parameters


  • learning_rate: 0.1
  • gamma: 1
  • colsample_bytree: 0.7
  • reg_lambda: 0
  • missing: nan
  • max_depth: 5
  • seed: 123
  • enable_categorical: False
  • eval_metric: logloss
  • objective: binary:logistic
  • n_estimators: 500
  • random_state: 123
  • min_child_weight: 3
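The names above match XGBoost's scikit-learn estimator parameters. As a sketch, they could be collected into a dict and passed as `xgboost.XGBClassifier(**params)`; the xgboost import and training data are omitted here, so only the configuration itself is shown:

```python
# Hyperparameters as reported, collected into a plain dict.
params = {
    "learning_rate": 0.1,
    "gamma": 1,
    "colsample_bytree": 0.7,
    "reg_lambda": 0,
    "missing": float("nan"),       # value treated as missing by XGBoost
    "max_depth": 5,
    "seed": 123,
    "enable_categorical": False,
    "eval_metric": "logloss",
    "objective": "binary:logistic",  # binary classifier with probability output
    "n_estimators": 500,
    "random_state": 123,
    "min_child_weight": 3,
}
```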

Considerations


Users

  • Hospitals
  • Clinicians
  • ML Engineers

Use Cases

  • Predicting prolonged length of stay
    Kind: primary

Fairness Assessment

  • Affected Group: sex, age
    Benefits: Improved health outcomes for patients.
    Harms: Biased predictions for patients in certain groups (e.g. older patients) may lead to worse health outcomes.
    We will monitor the performance of the model on these groups and retrain the model if the performance drops below a certain threshold.
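The per-group monitoring described above can be sketched as a simple check: compute a metric separately for each subgroup and flag groups that fall below the threshold. The group labels, data, and threshold below are all illustrative:

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each subgroup label."""
    out = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        out[g] = accuracy([y_true[i] for i in idx], [y_pred[i] for i in idx])
    return out

# Hypothetical labels, predictions, and an age-band subgroup.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["<65", "<65", "65+", "65+", "<65", "65+", "<65", "65+"]

THRESHOLD = 0.7  # illustrative retraining trigger
scores = per_group_accuracy(y_true, y_pred, groups)
flagged = [g for g, s in scores.items() if s < THRESHOLD]
```

Groups in `flagged` would trigger the retraining workflow described above.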

Ethical Considerations

  • The model should be continuously monitored for performance and retrained if the performance drops below a certain threshold.
    Risk: The model may be used to make decisions that affect the health of patients.