
EHRMamba

Towards Generalizable and Scalable Foundation Models for Electronic Health Records

1 Vector Institute   2 University of Toronto

Abstract

Transformers have significantly advanced the modeling of Electronic Health Records (EHR), yet their deployment in real-world healthcare is limited by several key challenges. Firstly, the quadratic computational cost and insufficient context length of these models pose significant obstacles for hospitals in processing the extensive medical histories typical in EHR data. Additionally, existing models employ separate finetuning for each clinical task, complicating maintenance in healthcare environments. Moreover, these models focus exclusively on either clinical prediction or EHR forecasting, lacking the flexibility to perform well across both. To overcome these limitations, we introduce EHRMamba, a robust foundation model built on the Mamba architecture. EHRMamba can process sequences up to four times longer than previous models due to its linear computational cost. We also introduce a novel approach to Multitask Prompted Finetuning (MPF) for EHR data, which enables EHRMamba to simultaneously learn multiple clinical tasks in a single finetuning phase, enhancing deployment and cross-task generalization. Furthermore, our model leverages the HL7 FHIR data standard to simplify integration into existing hospital systems. Alongside EHRMamba, we open-source Odyssey, a toolkit designed to support the development and deployment of EHR foundation models, with an emphasis on data standardization and interpretability. Our evaluations on the MIMIC-IV dataset demonstrate that EHRMamba advances state-of-the-art performance across 6 major clinical tasks and excels in EHR forecasting, marking a significant leap forward in the field.

Introduction

Personalized medicine is the pinnacle of healthcare innovation, and AI offers a promising path toward it. Central to this revolution are Electronic Health Records (EHR), which document patients' complete medical histories across hospital visits. With over 80% of hospitals in the US and Canada adopting EHR systems, this extensive data provides an unparalleled resource for training EHR foundation models. These models hold the potential to personalize treatment plans, uncover disease patterns, detect the onset of rare illnesses, and enhance clinical predictions.

Transformer-based models have demonstrated remarkable capabilities in modeling EHR data. However, their translation to real-world clinical settings remains an open challenge. Existing models often prioritize research performance and may not adequately consider the practical constraints hospitals face when deploying such large-scale models. These include limited computational resources, data privacy regulations mandating on-premise deployments, and the need for flexible models that generalize to new tasks and integrate seamlessly with existing healthcare infrastructure.

Computational Constraints: Deploying Transformer-based models in hospital settings is significantly challenged by the length of EHR data. The quadratic scaling of computational and memory requirements becomes prohibitive when sequences span tens of thousands of tokens to capture an entire medical history, and the resources required for such comprehensive analysis often far exceed what is available in many hospitals.

Finetuning Overhead: Finetuning EHR models for each clinical predictive task, such as mortality prediction, involves creating and maintaining separate copies of the base pretrained model for each task, leading to the management of multiple specialized models in hospitals. This multiplicity demands considerable resources and requires initiating each finetuning process from the base model. Moreover, finetuning task-specific models in isolation hinders their ability to integrate insights across different tasks, impairing the reliability and performance of the system.

Contributions

To overcome these limitations, we propose EHRMamba, a robust foundation model based on Mamba and designed for scalable, deployable, and generalizable EHR modeling. EHRMamba introduces several key contributions:

Scalability. EHRMamba reduces computational and memory demands to a linear scale during inference while enabling large-scale training through parallel processing. It extends the context length fourfold compared to previous transformer-based models, enabling the processing of longer EHR sequences that capture more comprehensive patient information.

Multitask Prompted Finetuning (MPF). We train EHRMamba using a variant of MPF for EHR data, allowing simultaneous learning of multiple clinical predictive tasks within a single finetuning phase. This approach enhances cross-task generalization, supports the learning of new tasks without modifying the model architecture, and simplifies real-world deployment in hospitals.

Dual Competence. EHRMamba is the first model to perform both EHR forecasting, predicting future data in EHR sequences, and clinical predictive modeling, predicting patient outcomes such as mortality. This dual functionality enables comprehensive disease pattern forecasting and personalized prediction timelines, facilitating tailored treatment plans based on individual patient trajectories.

Odyssey. EHRMamba is built using Odyssey, a toolkit designed to facilitate the development and deployment of EHR foundation models. Odyssey supports gathering and processing EHR sequences using the HL7 Fast Healthcare Interoperability Resources (FHIR) standard, which simplifies integration into existing hospital systems due to its widespread adoption in healthcare settings. See Appendix A for more information on Odyssey.

We assess EHRMamba on 6 clinical predictive tasks using the MIMIC-IV dataset. Our results show that EHRMamba achieves state-of-the-art performance while operating with significant memory and computational efficiency. Additionally, we present a patient case study highlighting its EHR forecasting capabilities and interpretability methods.

Data Representation

Patient data in EHR sequences are represented as a time series of event tokens, with each event capturing a medical occurrence. These sequences begin with a [CLS] token and include a series of visits marked by [VS] (visit start) and [VE] (visit end) tokens. Time intervals between visits are indicated by special tokens, such as [W2] for two weeks, and a [REG] token follows each visit end. Each event token is enriched with attributes like type (procedure, medication, lab result), age at the event, exact timestamp, visit segment, visit order, and position within the sequence. These attributes are mapped to distinct token spaces, and their embeddings are combined with the event token embeddings. The comprehensive embedding scheme integrates concept, type, age, time, segment, visit order, and positional embeddings, providing a rich temporal and contextual representation of patient data.
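
To make this embedding scheme concrete, below is a minimal PyTorch sketch of how the combined embedding could be assembled. The class name, vocabulary sizes, dimensions, and the choice of simple linear projections for the continuous age and time attributes are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn

class EHREmbedding(nn.Module):
    """Sum of concept, type, age, time, segment, visit-order, and positional embeddings."""
    def __init__(self, vocab_size, d_model=768, max_len=2048,
                 n_types=8, n_segments=3, max_visits=512):
        super().__init__()
        self.concept = nn.Embedding(vocab_size, d_model)      # event and special tokens
        self.type = nn.Embedding(n_types, d_model)            # procedure, medication, lab result, ...
        self.segment = nn.Embedding(n_segments, d_model)      # visit segment
        self.visit_order = nn.Embedding(max_visits, d_model)  # index of the visit in the history
        self.position = nn.Embedding(max_len, d_model)        # position within the sequence
        self.age = nn.Linear(1, d_model)                      # age at the event (continuous)
        self.time = nn.Linear(1, d_model)                     # exact timestamp (continuous)

    def forward(self, tokens, types, segments, visit_orders, ages, times):
        positions = torch.arange(tokens.size(1), device=tokens.device)
        return (self.concept(tokens) + self.type(types)
                + self.segment(segments) + self.visit_order(visit_orders)
                + self.position(positions)
                + self.age(ages.unsqueeze(-1)) + self.time(times.unsqueeze(-1)))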


EHRMamba

Architecture. The EHRMamba architecture is designed to optimize EHR modeling through a specialized embedding layer, multiple Mamba blocks, and custom heads for various tasks. The embedding layer maps input sequences to embedded inputs using the attributes described in Data Representation. Stacked Mamba blocks form the core of the architecture, functioning as sequence-to-sequence modules that preserve input and output dimensions and map the input embeddings to an output tensor. A key feature of EHRMamba is its adaptability to both forecasting and clinical prediction tasks, with distinct heads tailored to each.
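
As a rough illustration of this layout, the sketch below stacks Mamba blocks between the embedding layer and two task heads. It assumes the mamba_ssm package provides the Mamba block; the depth, hidden size, and head names are illustrative assumptions rather than the released configuration, and per-block residual connections and norms are omitted for brevity.

import torch.nn as nn
from mamba_ssm import Mamba

class EHRMambaSketch(nn.Module):
    def __init__(self, embedding: nn.Module, vocab_size, d_model=768, n_layers=12):
        super().__init__()
        self.embedding = embedding                        # e.g., the EHREmbedding sketch above
        self.blocks = nn.ModuleList(
            [Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2) for _ in range(n_layers)]
        )
        self.norm = nn.LayerNorm(d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)     # EHR forecasting (next-token) head
        self.cls_head = nn.Linear(d_model, 2)             # clinical prediction (binary) head

    def forward(self, *inputs):
        x = self.embedding(*inputs)                       # (batch, seq_len, d_model)
        for block in self.blocks:
            x = block(x)                                  # sequence-to-sequence, shape preserved
        x = self.norm(x)
        # forecast logits for every position; classification from the final (task) token
        return self.lm_head(x), self.cls_head(x[:, -1])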

Pretraining. EHRMamba undergoes pretraining using Next Token Prediction (NTP) to predict future events in patient sequences. This phase focuses on learning general temporal patterns and dynamics from unlabeled EHR data, preparing the model for more specific tasks.
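
For concreteness, a standard next-token-prediction objective over the forecast logits can be written as in the sketch below; the padding index is an assumption.

import torch.nn.functional as F

def ntp_loss(logits, tokens, pad_id=0):
    # logits: (batch, seq_len, vocab), tokens: (batch, seq_len)
    shift_logits = logits[:, :-1, :]          # predictions for positions 1..T-1
    shift_labels = tokens[:, 1:]              # the events that actually came next
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=pad_id,                  # padded positions do not contribute
    )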

Finetuning. In the finetuning stage, EHRMamba adapts the knowledge gained during pretraining to specific clinical tasks. This involves using a smaller, labeled dataset to optimize the model for predicting specific clinical outcomes, enhancing its precision and reliability in real-world applications.

Multitask Prompted Finetuning (MPF). We introduce MPF for EHR, enabling a single finetuned model to efficiently handle multiple clinical tasks. By replacing the first ([CLS]) and last ([REG]) tokens of a patient sequence with task-specific tokens (e.g., [MOR] for mortality prediction), the model can use the same patient sequence data for various tasks. This approach embeds task-specific information at the input level, enhancing task-specific processing and generalization. MPF simplifies deployment and maintenance by reducing the need for multiple classification heads, streamlining the addition of new tasks, and ensuring compatibility with frameworks like HuggingFace.
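
A minimal sketch of this prompted-input construction is shown below. Only [MOR] is named above, so the other task-token names and the exact task set are hypothetical.

TASK_TOKENS = {"mortality": "[MOR]", "length_of_stay": "[LOS]", "readmission": "[REA]"}

def build_mpf_sequence(sequence, task):
    # sequence looks like ["[CLS]", "[VS]", ..., "[VE]", "[REG]"]
    # swap the first ([CLS]) and last ([REG]) tokens for the task token
    task_token = TASK_TOKENS[task]
    return [task_token] + sequence[1:-1] + [task_token]

# The same visit history can now be paired with different task labels:
# build_mpf_sequence(seq, "mortality")   -> ["[MOR]", ..., "[MOR]"]
# build_mpf_sequence(seq, "readmission") -> ["[REA]", ..., "[REA]"]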


Experimental Setup

Dataset. We evaluate EHRMamba on MIMIC-IV, a real-world, publicly available EHR dataset from Beth Israel Deaconess Medical Center. It includes records from over 431,000 visits and 180,000 patients, featuring detailed temporal information on medical events such as procedures, medications, and lab results, along with demographic information such as age.

Clinical Predictive Tasks. We assess model performance on 6 primary clinical predictive tasks, all framed as binary classification: (1) Mortality Prediction, predicting whether a patient will pass away within one month of hospital discharge; (2) Length of Stay Prediction, estimating whether a patient's hospitalization will exceed one week based on the first 24 hours of admission; (3) Readmission Prediction, predicting whether a patient will be readmitted within one month of the most recent discharge; and three condition prediction tasks, (4) Condition 0 (Hypertension), (5) Condition 1 (Fluid Disorders), and (6) Condition 2 (Lipoid Metabolism Disorders), each predicting whether the patient will receive the corresponding diagnosis.
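
The sketch below illustrates how such binary labels could be derived with pandas. The column names (admit_time, discharge_time, death_time, next_admit_time) and the 30-day approximation of one month are hypothetical; this is not the MIMIC-IV schema or the Odyssey pipeline.

import pandas as pd

def make_labels(adm: pd.DataFrame) -> pd.DataFrame:
    one_month, one_week = pd.Timedelta(days=30), pd.Timedelta(days=7)
    adm = adm.copy()
    # (1) Mortality: death within one month of discharge (NaT death times compare as False)
    adm["mortality"] = (adm["death_time"] - adm["discharge_time"]) <= one_month
    # (2) Length of stay: hospitalization longer than one week
    #     (at prediction time the model only sees the first 24 hours of events)
    adm["long_stay"] = (adm["discharge_time"] - adm["admit_time"]) > one_week
    # (3) Readmission: next admission within one month of the most recent discharge
    adm["readmission"] = (adm["next_admit_time"] - adm["discharge_time"]) <= one_month
    return adm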

Evaluation Metrics. We evaluate model performance using the Area Under the Receiver Operating Characteristic Curve (AUROC), the Area Under the Precision-Recall Curve (AUPRC), and the F1-Score. We report averages and standard deviations over three runs with randomized seeds, and use independent two-sample t-tests to assess statistical significance.
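
A small sketch of this evaluation protocol is given below, assuming per-seed predicted probabilities are available; the 0.5 threshold used to binarize predictions for the F1-Score is an assumption.

import numpy as np
from scipy.stats import ttest_ind
from sklearn.metrics import roc_auc_score, average_precision_score, f1_score

def evaluate(y_true, y_prob, threshold=0.5):
    return {
        "auroc": roc_auc_score(y_true, y_prob),
        "auprc": average_precision_score(y_true, y_prob),
        "f1": f1_score(y_true, y_prob >= threshold),
    }

def summarize(per_seed_metrics):
    # per_seed_metrics: list of dicts, one per random seed
    return {k: (np.mean([m[k] for m in per_seed_metrics]),
                np.std([m[k] for m in per_seed_metrics]))
            for k in per_seed_metrics[0]}

# Significance between two models on one metric (e.g., three AUROC values each):
# t_stat, p_value = ttest_ind(auroc_model_a, auroc_model_b)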

Baseline Models

We compare EHRMamba to 5 baseline models, all of which, except XGBoost, use the same embedding layer. Additionally, all models except MultiBird and EHRMamba are trained or finetuned separately for each clinical task.

  • XGBoost. The input features of the XGBoost model are the frequencies of vocabulary tokens, excluding any special tokens, along with the patient's age at their first and last visits (a feature-construction sketch follows this list).
  • LSTM. This is a standard bi-directional LSTM model.
  • CEHR-BERT. An adaptation of the BERT architecture for EHR data that introduced the idea of incorporating temporal information using time embeddings and special time interval tokens. CEHR-BERT outperformed prior clinical BERT adaptations in various tasks including predicting patient mortality, hospital readmission, and several disease diagnoses.
  • BigBird Transformer. The BigBird Transformer, a variant of the BERT model, employs a form of attention known as block sparse attention. This modification allows for more efficient memory usage, facilitating the processing of longer EHR sequences. Here, we use a vanilla BigBird model with a context length of 2048 tokens, 4x greater than that of the CEHR-BERT model.
  • MultiBird Transformer. The MultiBird Transformer adopts the structural framework of the BigBird model but is trained using MPF, which enables a single finetuned model to excel across multiple downstream tasks. This training strategy is compatible with existing BigBird model implementations on HuggingFace, simplifying deployment of trained models.
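
Below is a hedged sketch of the XGBoost feature construction referenced in the first bullet: bag-of-token frequencies with special tokens dropped, concatenated with the patient's age at their first and last visits. The function names and classifier settings are illustrative.

from collections import Counter
import numpy as np
import xgboost as xgb

def featurize(patient_tokens, vocab, special_tokens, age_first, age_last):
    # count non-special tokens, then lay them out in a fixed vocabulary order
    counts = Counter(t for t in patient_tokens if t not in special_tokens)
    freq = np.array([counts[t] for t in vocab], dtype=float)
    return np.concatenate([freq, [age_first, age_last]])

# X = np.stack([featurize(seq, vocab, SPECIALS, a0, a1) for seq, a0, a1 in records])
# clf = xgb.XGBClassifier(n_estimators=300, max_depth=6)  # illustrative settings
# clf.fit(X, y)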

Main Results

EHRMamba and MultiBird show a significant performance advantage over other models, due to their finetuning with MPF, which enhances their ability to integrate insights across multiple tasks. This is especially beneficial for complex tasks like readmission prediction and tasks with fewer data points like Condition 2. There is no substantial performance difference between BigBird and CEHR-BERT, indicating that block sparse attention matches the efficacy of full attention. However, BigBird does outperform CEHR-BERT on the condition prediction tasks, owing to its longer context length. Notably, XGBoost struggles with tasks that require capturing temporal and sequential information, as it primarily processes token frequencies. Overall, EHRMamba outperforms MultiBird while also being far more memory- and compute-efficient, making it a superior choice for a wide range of EHR modeling objectives.


Visualized Case Study

We present a case study of deceased Patient X below:

Interpretability. We use integrated gradients to assess the impact of each token in the EHR sequence on the clinical predictions. This method integrates the gradients of the model’s outputs with respect to each input token, from a baseline (such as all zeros) to the actual input. The right figure shows the average attribution scores for EHRMamba's positive predictions on the mortality task.
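
The sketch below shows how such token-level attributions could be computed with Captum's LayerIntegratedGradients at the concept-embedding layer, reusing names from the architecture sketch above. The forward wrapper, the all-zeros (e.g., all-[PAD]) baseline, and the positive-class index are assumptions rather than the exact setup used here.

import torch
from captum.attr import LayerIntegratedGradients

def token_attributions(model, tokens, types, segments, visit_orders, ages, times):
    def positive_mortality_logit(tok, *fixed):
        # only the event tokens are attributed; the other attribute tensors are held fixed
        _, task_logits = model(tok, *fixed)
        return task_logits[:, 1]                       # logit of the positive class

    lig = LayerIntegratedGradients(positive_mortality_logit, model.embedding.concept)
    attributions = lig.attribute(
        inputs=tokens,
        baselines=torch.zeros_like(tokens),            # baseline input, e.g., all zeros
        additional_forward_args=(types, segments, visit_orders, ages, times),
    )
    return attributions.sum(dim=-1)                    # one attribution score per token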

Forecasting. Using EHRMamba, we forecast the next event tokens in the sequence given the preceding tokens. The left figure compares some of these predictions with the actual events. Although the predicted tokens do not always match, they often represent relevant medical concepts.
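
A minimal greedy-decoding sketch of this forecasting loop is shown below, assuming a function that maps a token-id tensor to next-token logits; in practice the auxiliary attributes (type, age, time, and so on) of each newly generated event would also need to be extended.

import torch

@torch.no_grad()
def forecast_events(next_token_logits_fn, tokens, n_events=10):
    # next_token_logits_fn: (batch, seq_len) token ids -> (batch, seq_len, vocab) logits
    for _ in range(n_events):
        logits = next_token_logits_fn(tokens)
        next_token = logits[:, -1].argmax(dim=-1, keepdim=True)   # most likely next event
        tokens = torch.cat([tokens, next_token], dim=1)           # append and continue
    return tokens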

[Figure: Case study of Patient X. Left: forecasted next event tokens compared with the actual events. Right: average token attribution scores for EHRMamba's positive mortality predictions.]

Odyssey Toolkit

The Odyssey toolkit is designed to support the development and deployment of EHR foundation models:

Odyssey Toolkit GitHub

It includes 4 major modules:

  • data: This module includes scripts designed for gathering EHR datasets from HL7 FHIR resources. It handles the generation and processing of patient sequences for each clinical task, tokenizing the data, and creating the necessary data splits for model training. Additionally, it provides the dataset class used for training the models.
  • models: This module offers implementations for models used in this study, including XGBoost, LSTM, CEHR-BERT, BigBird, MultiBird, and EHRMamba. It also includes various embedding classes essential for the models.
  • evals: This module includes tools for testing models on different clinical prediction tasks and forecasting. It provides evaluation metrics that ensure a thorough assessment of model performance.
  • interp: This module contains methods for interpreting model decisions. It includes interactive visualization of the attention matrix for Transformer-based models, novel interpretability techniques for EHRMamba, and gradient attribution methods. These tools enhance the transparency and understanding of model decisions.

Conclusion

We introduced EHRMamba, a novel EHR foundation model based on Mamba that leverages Multitask Prompted Finetuning (MPF) to overcome the limitations of current transformer-based models. EHRMamba excels in handling long temporal sequences and learning multiple tasks simultaneously, achieving state-of-the-art performance on 6 clinical prediction tasks in the MIMIC-IV dataset. Additionally, we open-sourced the Odyssey toolkit, supporting the development and deployment of EHR models. EHRMamba significantly advances EHR modeling, offering a robust, scalable, and generalizable solution for improving patient outcomes and clinical decision-making.

BibTeX

@misc{fallahpour2024EHRMamba,
      title={EHRMamba: Towards Generalizable and Scalable Foundation Models for Electronic Health Records},
      author={Adibvafa Fallahpour and Mahshid Alinoori and Arash Afkanpour and Amrit Krishnan},
      year={2024},
      eprint={2405.14567},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}