Designing Bias-Aware AI Models to Address Health Disparities in Multi-Ethnic Healthcare Systems
DOI:
https://doi.org/10.5281/zenodo.15877694

Keywords:
Bias-aware, AI models, health disparities, multi-ethnic, healthcare systems, algorithmic fairness, equity, predictive modeling, machine learning, healthcare bias, ethnic diversity, data fairness, medical AI, clinical decision-making, demographic variables, inclusive design, fairness metrics, health equity, protected attributes, treatment fairness, outcome disparities, ethical AI, diverse populations, personalized care, culturally competent AI, data-driven healthcare, algorithmic transparency, fairness-aware training, underrepresented groups, equitable healthcare.

Abstract
The World Health Organization defines health disparities as differences in health outcomes that are avoidable, unfair, and unjust. To help close these disparities in minority and lower-income communities, the development of artificial intelligence (AI) and machine learning (ML) algorithms to assist healthcare systems has gained momentum. Such technology is becoming pervasive: it is used for health screening, to support clinical decision-making, and to design treatment strategies. When applied to large volumes of data from electronic health records, these algorithms can identify at-risk patients, suggest diagnoses, help design preventative strategies, and find cost-effective treatments. However, despite the rising importance of AI/ML and the potential for unintended amplification of existing bias, little work has proposed and laid out solutions for designing AI models that are bias-aware while maximizing efficacy and that are specifically targeted at addressing health disparities. In this paper, we discuss the particular challenges associated with addressing health disparities in the context of a multi-ethnic, multicultural, and multilingual United States healthcare system.
In particular, we present a framework both for surfacing bias in existing AI algorithms and for incorporating bias awareness into the design of new ones. At the data level, the challenges include the heterogeneity of the population with respect to ethnicity, language, culture, and socioeconomic status; insufficient data volume and labeling; class imbalance; and privacy concerns. At the algorithmic level, the challenges include a lack of awareness of the potential health impact of bias; the focus of many AI algorithms on overall accuracy rather than fairness analysis; and the inability of algorithms to explain their predictions in terms of the captured data. We lay out a framework for nurturing a new generation of machine learning researchers: one with the cultural and ethnic training to design and embrace bias-aware algorithms that amplify the voice of those who bear the bias. Finally, we discuss possible directions for future work, urging impact-driven research.
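To make the contrast between overall accuracy and fairness analysis concrete, the minimal sketch below (not taken from the paper; the patient outcomes, predictions, and group labels are hypothetical toy data) computes the per-group true positive rate and an equal-opportunity gap alongside overall accuracy, showing how a model that looks acceptable in aggregate can still under-detect at-risk patients in one group.

```python
# Illustrative sketch only: comparing overall accuracy with a simple
# group-fairness check on hypothetical data.
import numpy as np

def overall_accuracy(y_true, y_pred):
    """Fraction of correct predictions across the whole population."""
    return np.mean(y_true == y_pred)

def true_positive_rate(y_true, y_pred):
    """Sensitivity: fraction of truly at-risk patients the model flags."""
    positives = y_true == 1
    if positives.sum() == 0:
        return float("nan")
    return np.mean(y_pred[positives] == 1)

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference between the highest and lowest per-group true positive rate.

    A gap near 0 means at-risk patients are detected at similar rates across
    groups; a large gap signals disparate under-detection for some group.
    """
    rates = {
        g: true_positive_rate(y_true[group == g], y_pred[group == g])
        for g in np.unique(group)
    }
    values = list(rates.values())
    return max(values) - min(values), rates

# Hypothetical toy data: 1 = at-risk patient who should be flagged for follow-up.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap, per_group = equal_opportunity_gap(y_true, y_pred, group)
print("overall accuracy:", overall_accuracy(y_true, y_pred))  # 0.7 in aggregate
print("TPR per group:", per_group)                            # the aggregate hides the disparity
print("equal-opportunity gap:", gap)                          # ~0.33 between groups A and B
```

In this toy example the aggregate accuracy is 0.7, yet group B's at-risk patients are flagged at half the rate of group A's; fairness-aware evaluation of this kind is what the framework asks of algorithm designers in place of accuracy alone.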