Reproducing and adapting a VA-trained AKI predictive model for a U-M hospital setting

Above: Extended Data Fig. 2 | Calibration of the original VA model on the a) VA test set and b) UM test set. Predicted probabilities (deciles) are plotted against observed probabilities with 95% confidence intervals; the diagonal line indicates ideal calibration. Calibration is shown for all patients (red), females only (green), and males only (blue).
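For readers curious how a decile-based calibration check like the one in the figure is typically computed, here is a minimal Python sketch using scikit-learn. The data and variable names are illustrative placeholders, not the study's actual pipeline.

```python
# Illustrative sketch of a decile-based calibration check: predicted
# probabilities are binned into deciles and compared with observed event rates.
# The outcomes and risk scores below are synthetic placeholders.
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=5000)                    # observed outcomes (0/1)
y_pred = y_true * 0.4 + rng.random(5000) * 0.6            # placeholder risk scores

# strategy="quantile" with 10 bins yields one point per decile of predicted risk.
obs_rate, mean_pred = calibration_curve(y_true, y_pred, n_bins=10, strategy="quantile")

for p, o in zip(mean_pred, obs_rate):
    print(f"mean predicted {p:.2f} -> observed {o:.2f}")
```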

In 2019, Google AI subsidiary DeepMind used a large dataset of VA patient records to develop a predictive model for acute kidney injury (AKI)—a potentially fatal condition whose prognosis improves the earlier a treatment intervention is administered. The DeepMind model purported to predict AKI 48 hours in advance, allowing ample lead time for clinicians to intervene and administer treatment.

The study reviewed electronic health records (EHR) data over a five-year period and included more than 700,000 individuals. “This is a phenomenal model because it can predict AKI up to 48 hours in advance, in a continuous manner, and has the best model performance compared to all previously published models,” said Jie Cao, a PhD student in computational medicine and bioinformatics and researcher in Karandeep Singh’s ML4LHS Lab and Kayvan Najarian’s Biomedical & Clinical Informatics Lab.

“But concerns were raised about the generalizability of a model like this,” Cao continued, “given the predominantly male [VA] population that it was trained on.” This led Cao and her colleagues to evaluate the model’s generalizability in a non-VA, more sex-balanced population. Their findings have been published in Nature Machine Intelligence.

The researchers reconstructed aspects of the DeepMind model, then trained and validated this model on two cohorts: one comprising 278,813 VA hospitalizations (from 118 VA hospitals) and the other 165,359 U-M hospitalizations. Not surprisingly, given the 94% male population with which the original model was developed, the reconstructed model performed worse for female patients in both cohorts.
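As an illustration of the kind of sex-stratified evaluation described above, the sketch below computes a model's discrimination (AUROC) separately for female and male patients. The data frame, column names, and scores are placeholders, not the study's data or code.

```python
# Sketch of a sex-stratified performance check: compute AUROC separately for
# female and male patients. All data and column names are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sex": rng.choice(["F", "M"], size=5000),
    "aki_within_48h": rng.integers(0, 2, size=5000),   # observed outcome (0/1)
    "predicted_risk": rng.random(5000),                 # model's predicted probability
})

for sex, group in df.groupby("sex"):
    auc = roc_auc_score(group["aki_within_48h"], group["predicted_risk"])
    print(f"{sex}: AUROC = {auc:.3f}")
```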

To mitigate the model’s sex-based discrepancies, researchers updated the model with data from U-M’s more sex-balanced patient population, which extended the original model from 160 decision trees to 170. This small extension improved performance in the U-M cohort both overall and within each sex.
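The article does not detail the extension procedure, but one way a tree-by-tree extension like this can be implemented is by warm-starting additional boosting rounds on the new site's data. The XGBoost sketch below shows the general idea; the synthetic data, parameters, and file handling are placeholders rather than the paper's actual setup.

```python
# Minimal sketch of extending an existing gradient-boosted tree model with
# new-site data via warm-started boosting. Data and hyperparameters are
# synthetic placeholders, not the study's pipeline.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)

# Placeholder "VA" and "U-M" datasets standing in for the real EHR features.
X_va, y_va = rng.normal(size=(1000, 20)), rng.integers(0, 2, size=1000)
X_um, y_um = rng.normal(size=(1000, 20)), rng.integers(0, 2, size=1000)

params = {"objective": "binary:logistic", "eval_metric": "auc"}

# Original model: 160 boosting rounds fit on VA data.
va_model = xgb.train(params, xgb.DMatrix(X_va, label=y_va), num_boost_round=160)

# Extended model: warm-start from the VA booster and add 10 more rounds fit on
# U-M data, yielding a 170-tree ensemble.
extended_model = xgb.train(
    params,
    xgb.DMatrix(X_um, label=y_um),
    num_boost_round=10,
    xgb_model=va_model,
)
```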

“The extended model was successful at U-M. It used the VA model as the backbone, added information from U-M, and the final product worked well for the U-M patient population,” said Cao, lead author of the paper. “When researchers would like to benefit from the rich information contained in the original model and do not want to build a new local model from scratch, our study is a good example of how ‘fine-tuning’ could work when the original model was not trained on a diverse population,” she explained.

When the extended model was applied to VA patients, however, the discrepancies in model performance between males and females actually worsened.

“This finding surprised us to some extent,” said Cao, “but it is also reasonable and helps us understand the problem better. Difference in patient characteristics is one common factor contributing to model performance discrepancy. By matching the female patients at two different health systems and still finding discrepant model performance, we actually show that difference in patient characteristics wasn’t the only reason contributing to model performance discrepancy.”

Lower performance of the extended model in female VA patients, then, was not a function of patient characteristics or small sample size, but was likely attributable to factors such as differences in practice patterns for male and female patients at the VA.

Overall, the study demonstrated the value of updating existing models with data from the population to which the model will be applied.  “If a predictive model is to be taken out of one healthcare system and applied to another, the population the model was trained on is often different from the population it is going to be applied to. Even if the training population is diverse, we could observe a drop in model performance if nothing is done,” said Cao. “Our ‘extended model’ approach is to provide a solution to partially address this issue.”

To achieve peak performance, a model would, in theory, be applied only to a population matching the population it was trained on. But this is often not the case in practice. “In the real world,” said Cao, this approach is “infeasible due to limited resources, time, expertise, etc. Our ‘extended model’ strategy is a workaround in these scenarios.”

This research is significant, said Cao, because it shows “the complexity of discrepancies in model performance in subgroups that cannot be explained simply on the basis of sample size.” It also offers “a potential strategy to mitigate the generalizability issue,” she said, and, finally, it demonstrates “the importance of reproducing and evaluating artificial intelligence studies.”