Ahmed Al Mahrooqi


2025

Building Trust in Clinical LLMs: Bias Analysis and Dataset Transparency
Svetlana Maslenkova | Clement Christophe | Marco AF Pimentel | Tathagata Raha | Muhammad Umar Salman | Ahmed Al Mahrooqi | Avani Gupta | Shadab Khan | Ronnie Rajan | Praveenkumar Kanithi
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Large language models offer transformative potential for healthcare, yet their responsible and equitable development depends critically on a deeper understanding of how training data characteristics influence model behavior, including the potential for bias. Current practices in dataset curation and bias assessment often lack the necessary transparency, creating an urgent need for comprehensive evaluation frameworks to foster trust and guide improvements. In this study, we present an in-depth analysis of potential downstream biases in clinical language models, focusing on differential opioid prescription tendencies across demographic groups defined by ethnicity, gender, and age. As part of this investigation, we introduce HC4 (Healthcare Comprehensive Commons Corpus), a novel and extensively curated pretraining dataset exceeding 89 billion tokens. Our evaluation leverages both established general benchmarks and a new, healthcare-specific methodology, offering crucial insights to support fairness and safety in clinical AI applications.