Bilgin Osmanodja
2025
One Size Fits None: Rethinking Fairness in Medical AI
Roland Roller | Michael Hahn | Ajay Madhavan Ravichandran | Bilgin Osmanodja | Florian Oetke | Zeineb Sassi | Aljoscha Burchardt | Klaus Netter | Klemens Budde | Anne Herrmann | Tobias Strapatsas | Peter Dabrock | Sebastian Möller
Proceedings of the 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
Machine learning (ML) models are increasingly used to support clinical decision-making. However, real-world medical datasets are often noisy, incomplete, and imbalanced, leading to performance disparities across patient subgroups. These differences raise fairness concerns, particularly when they reinforce existing disadvantages for marginalized groups. In this work, we analyze several medical prediction tasks and demonstrate how model performance varies with patient characteristics. While ML models may achieve good overall performance, we argue that subgroup-level evaluation is essential before integrating them into clinical workflows. A performance analysis at the subgroup level makes such differences clearly visible, allowing performance disparities to be taken into account in clinical practice and, at the same time, letting these insights inform the responsible development of more effective models. Our work thereby contributes to a practical discussion around the subgroup-sensitive development and deployment of medical ML models and the interconnectedness of fairness and transparency.
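The subgroup-level evaluation the abstract argues for can be sketched in a few lines. The snippet below is an illustrative example, not the paper's actual pipeline: it assumes a hypothetical prediction table with columns `outcome`, `risk_score`, and a grouping attribute such as `sex`, and reports per-subgroup AUROC so that performance gaps become visible before deployment.

```python
# Minimal sketch of subgroup-level performance evaluation (illustrative only;
# column names "outcome", "risk_score", and the grouping column are assumptions).
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_performance(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compute sample size, outcome prevalence, and AUROC per subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        if sub["outcome"].nunique() < 2:
            continue  # AUROC is undefined if a subgroup contains only one outcome class
        rows.append({
            group_col: group,
            "n": len(sub),
            "prevalence": sub["outcome"].mean(),
            "auroc": roc_auc_score(sub["outcome"], sub["risk_score"]),
        })
    return pd.DataFrame(rows)

# Example usage (predictions_df is a hypothetical frame of model scores and labels):
# report = subgroup_performance(predictions_df, group_col="sex")
# print(report)  # large AUROC gaps between subgroups flag potential fairness issues
```

Reporting such a table alongside the overall metric is one simple way to make the disparities discussed in the paper inspectable by clinicians and developers.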
2022
An Annotated Corpus of Textual Explanations for Clinical Decision Support
Roland Roller | Aljoscha Burchardt | Nils Feldhus | Laura Seiffe | Klemens Budde | Simon Ronicke | Bilgin Osmanodja
Proceedings of the Thirteenth Language Resources and Evaluation Conference
In recent years, machine learning for clinical decision support has gained more and more attention. To introduce such applications into clinical practice, good performance is essential; however, the aspect of trust should not be underestimated. For the treating physician who uses such a system and is (legally) responsible for the decision made, it is particularly important to understand the system's recommendation. To provide insights into a model's decision, various techniques from the field of explainability (XAI) have been proposed, but their output is often not targeted at the domain experts who want to use the model. To close this gap, in this work we explore what explanations could look like in the future. To this end, we present a dataset of textual explanations in the context of decision support. Within a reader study, human physicians estimated the likelihood of possible negative patient outcomes in the near future and justified each decision with a few sentences. Using those sentences, we created a novel corpus annotated with different semantic layers. Moreover, we analyze how those explanations are constructed, how they change depending on the physician and the estimated risk, and how they compare to an automatic clinical decision support system based on feature importance.
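For contrast with the physicians' free-text justifications, the abstract mentions a decision support system whose explanations consist of feature importances. The sketch below is not the paper's system; it only illustrates, with synthetic data and made-up feature names, the kind of ranked-importance output such a system typically shows to clinicians.

```python
# Illustrative sketch of feature-importance explanations (synthetic data;
# model choice and feature names are assumptions, not the paper's system).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # hypothetical features, e.g. creatinine, hemoglobin, CRP, age
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

feature_names = ["creatinine", "hemoglobin", "crp", "age"]
ranked = sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked:
    # A ranked list like this is the typical XAI output that the annotated
    # textual explanations are compared against in the paper.
    print(f"{name}: {score:.3f}")
```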