$250,000 Awarded to Dr. Sherry Wang for AI/ML Patient Safety Research
October 1, 2024
Dr. Sherry Wang has been awarded a $250,000 grant from the NIH’s Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program. Her proposal, “Ethics and Equity in Healthcare Artificial Intelligence (AI) Application: Bias Assessment in Machine Learning (ML) Models for Opioid Prescribing,” will investigate potential biases in AI and ML systems used to treat patients with opioid use disorder.
Dr. Wang will serve as the principal investigator (PI) of this project, with Dr. Yuxin Wen (Fowler School of Engineering), Mr. Ivan Portillo (Leatherby Libraries), and a researcher from another institution as co-investigators. The project also provides valuable learning and research opportunities for our PharmD student Peter Lim, undergraduate students Ryan Jewik and Nathan Watkins, and PhD student Melody Fewx, all of whom will play key roles in the study.
The study will investigate the extent to which existing AI and ML models used in clinical decision support (CDS) tools for opioid prescribing may create or reinforce inequities in patient care. Although these tools are designed to improve clinical decision-making by predicting opioid-related outcomes such as overdose risk, biases embedded in their algorithms can mislead healthcare providers.
Funded by NIH’s AIM-AHEAD Biomedical Research and Clinical Practice That Embodies Ethics and Equity (ABC-EE) program, the research will contribute to the broader goal of developing AI/ML methodologies that address bias detection and promote equity in healthcare AI applications.
The research is organized around two specific aims:
- Conduct a comprehensive review of opioid CDS systems and published ML models from 2010 to 2024, and apply the Prediction Model Risk-of-Bias Assessment Tool (PROBAST) to identify potential algorithmic biases in four key areas: participants, predictors, outcomes, and analytical methodologies. These models will be classified as “high” or “low” risk of bias based on a set of 20 signaling questions (see the sketch after this list).
- Adapt the NarxCare model, a CDS tool that integrates ML algorithms with Prescription Drug Monitoring Programs (PDMPs), to assess its fairness in the context of California’s PDMP data. NarxCare provides real-time insights into a patient’s potential risk of opioid overdose; however, its performance metrics, such as accuracy, may not capture inherent biases in patient treatment based on race, socioeconomic status, or geographic location.
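For readers curious how the PROBAST roll-up in the first aim might look in practice, here is a minimal sketch in Python. It assumes each of PROBAST’s four domains is rated from the answers to its signaling questions and that any high-risk domain makes the overall judgment “high” risk; the answer values and the example model are hypothetical illustrations, not data from the study.

```python
# Hypothetical sketch of a PROBAST-style risk-of-bias roll-up.
from typing import Dict, List

# Signaling questions are answered "yes", "probably yes", "probably no",
# "no", or "no information".
FAVORABLE = {"yes", "probably yes"}

def domain_risk(answers: List[str]) -> str:
    """Rate one PROBAST domain from its signaling-question answers."""
    if all(a in FAVORABLE for a in answers):
        return "low"
    if any(a in {"no", "probably no"} for a in answers):
        return "high"
    return "unclear"  # only "no information" answers remain

def overall_risk(domains: Dict[str, List[str]]) -> str:
    """Roll the four domain ratings up into an overall risk-of-bias call."""
    ratings = [domain_risk(answers) for answers in domains.values()]
    if "high" in ratings:
        return "high"
    if all(r == "low" for r in ratings):
        return "low"
    return "unclear"

# Hypothetical example: a published opioid-risk model whose analysis domain
# has one unfavorable answer (e.g., missing data handled poorly).
example_model = {
    "participants": ["yes", "yes"],
    "predictors":   ["yes", "probably yes", "yes"],
    "outcome":      ["yes"] * 6,
    "analysis":     ["yes"] * 8 + ["no"],
}
print(overall_risk(example_model))  # -> "high"
```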
Using fairness metrics, the research team will assess NarxCare for both “group fairness” and “individual fairness” to identify potential inequities in care, as sketched below. Leveraging patient-level social determinants of health (SDOH) data, they will evaluate how equitably the tool performs across populations defined by sensitive attributes such as race or income level, and identify ways to bring it into alignment with standards of equity.
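As an illustration of the kind of group-fairness check described above, the following Python sketch compares selection rates (demographic parity) and true-positive rates (equal opportunity) across groups defined by a sensitive attribute. The column names, toy data, and binary “risk_flag” are hypothetical placeholders rather than the study’s actual NarxCare or PDMP variables; an individual-fairness check would additionally require a similarity metric for comparing like patients.

```python
import pandas as pd

# Hypothetical scored cohort: true overdose outcome, model risk flag,
# and a sensitive attribute such as race or income bracket.
df = pd.DataFrame({
    "y_true":    [1, 0, 0, 1, 0, 1, 0, 0],
    "risk_flag": [1, 0, 1, 0, 0, 1, 1, 0],
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
})

def group_fairness_gaps(data: pd.DataFrame, group_col: str) -> pd.Series:
    """Largest between-group gaps in selection rate and true-positive rate."""
    # Demographic parity: how often each group is flagged as high risk.
    selection_rate = data.groupby(group_col)["risk_flag"].mean()
    # Equal opportunity: flag rate among patients who truly had the outcome.
    tpr = data[data["y_true"] == 1].groupby(group_col)["risk_flag"].mean()
    return pd.Series({
        "demographic_parity_gap": selection_rate.max() - selection_rate.min(),
        "equal_opportunity_gap": tpr.max() - tpr.min(),
    })

print(group_fairness_gaps(df, "group"))
```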
The Stratification Tool for Opioid Risk Mitigation (STORM), a similar model used by the Veterans Health Administration (VHA), has already been found to exhibit racial bias and to have negatively affected patient care. Findings like these underscore the need to examine, and where necessary improve, ML-driven healthcare tools, since unequal access to healthcare resources and uneven diagnostic accuracy can worsen health outcomes for underserved populations.