Opportunity for Feedback: Principles To Address the Impact of Healthcare Algorithms on Racial and Ethnic Disparities in Health and Healthcare
The Agency for Healthcare Research and Quality (AHRQ) and the National Institute on Minority Health and Health Disparities (NIMHD) are hosting a virtual meeting on May 15 to seek feedback on draft guiding principles for preventing or mitigating bias related to healthcare algorithms that affects racial and ethnic disparities in healthcare.
Healthcare algorithms, often based on statistical or mathematical models including machine learning, can offer value in diagnostics and treatment. At the same time, algorithms can create and perpetuate inequities in healthcare. To address this issue, AHRQ and NIMHD hosted a 2-day meeting in March to present a draft AHRQ Evidence Report examining the published studies on this topic and to learn from the field about the state of the science and current perspectives. The March meeting informed the development, by a diverse expert panel, of a set of guiding principles for preventing or mitigating bias related to healthcare algorithms that affects racial and ethnic disparities in healthcare. These draft principles will be shared with the public for feedback at the upcoming virtual meeting on May 15.
To register, visit Expert Panel on Racial Bias and Healthcare Algorithms: Public Meeting.
Detailed Project Background
Healthcare algorithms are frequently used to guide clinical decision making both at the point of care and as part of resource allocation and healthcare management. The evidence report defines algorithms as mathematical formulas and models that combine different input variables or factors to inform a calculation or an estimate—frequently an estimate of risk. Algorithms are often incorporated into healthcare decision tools, such as clinical guidelines, pathways, clinical decision support programs in electronic health records, and operational systems used by health systems and payers.
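The report's definition — a formula that combines different input variables to inform an estimate, frequently an estimate of risk — can be illustrated with a minimal sketch. The variables and weights below are entirely hypothetical and chosen for illustration only; they do not represent any validated clinical model.

```python
import math

def risk_estimate(age, systolic_bp, smoker):
    """Combine input variables via a weighted sum, then map the result
    to a probability between 0 and 1 — the shape shared by many
    risk-estimation algorithms."""
    # Hypothetical weights for illustration only -- not a clinical model.
    score = -7.0 + 0.06 * age + 0.02 * systolic_bp + (0.7 if smoker else 0.0)
    # A logistic function converts the raw score into an estimated risk.
    return 1.0 / (1.0 + math.exp(-score))
```

In a decision tool, an estimate like this would typically be compared against a threshold to trigger a guideline recommendation or a clinical decision support alert — which is also where a biased formula can directly change who receives care.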
Use of algorithms is expanding in many realms of healthcare, from diagnostics and treatments to payer systems and business processes. Every sector of the healthcare system is testing the technology to improve patient outcomes, accelerate research, and reduce costs. Although algorithms are widely used and can offer value in diagnostics and treatments, not all individuals benefit equally from them, creating inequities. This is primarily due to biases that result in undue harm to marginalized populations, such as racial and ethnic minorities, and that perpetuate healthcare disparities. Recognition of such disparities has motivated a growing call for clinical algorithms to be both trained and validated on diverse patient data, with representation across spectrums of sex, age, race, ethnicity, and more. To rectify these issues, the field needs to understand when leveraging algorithms leads to unintended biases, how to identify biases before implementation, and what to do with biases discovered after implementation.
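One concrete way to identify biases before implementation is to disaggregate an algorithm's performance by demographic group during validation. The following is a minimal sketch of such a subgroup audit; the function name, the grouping attribute, and the data format are assumptions made for illustration, not part of any specific tool referenced in the report.

```python
def subgroup_error_rates(records):
    """Compute the misclassification rate separately for each demographic group.

    records: iterable of (group, predicted, actual) tuples.
    Returns a dict mapping each group to its error rate, so large gaps
    between groups can flag a potentially biased algorithm before deployment.
    """
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}
```

Error rate is only one of several possible disaggregated metrics; the same pattern applies to calibration, false-negative rates, or resource-allocation rates, each of which can surface a different form of inequity.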
In fall 2020, AHRQ received a congressional request to commission an evidence review examining the use of race and ethnicity within healthcare algorithms, the extent of their use and impact on health disparities, and potential solutions for mitigating racial and ethnic biases to reduce disparities and improve outcomes for racial and ethnic minorities. AHRQ subsequently issued a Request for Information to solicit public input and commissioned an evidence review through its Evidence-based Practice Center Program to review the literature on the topic.
Supplementing the evidence review activities, AHRQ and NIMHD are sponsoring a panel of diverse experts representing varied stakeholder perspectives to contribute to the development of guiding principles and actionable solutions for the use of race and ethnicity within healthcare algorithms. The evidence review and its follow-on expert panel activities will support the field in recognizing the potential for algorithms to mitigate or amplify racial and ethnic bias, identifying or preventing biases before implementation, and mitigating biases discovered after implementation.