Ajay Madhavan Ravichandran
2025
One Size Fits None: Rethinking Fairness in Medical AI
Roland Roller | Michael Hahn | Ajay Madhavan Ravichandran | Bilgin Osmanodja | Florian Oetke | Zeineb Sassi | Aljoscha Burchardt | Klaus Netter | Klemens Budde | Anne Herrmann | Tobias Strapatsas | Peter Dabrock | Sebastian Möller
Proceedings of the 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
Machine learning (ML) models are increasingly used to support clinical decision-making. However, real-world medical datasets are often noisy, incomplete, and imbalanced, leading to performance disparities across patient subgroups. These differences raise fairness concerns, particularly when they reinforce existing disadvantages for marginalized groups. In this work, we analyze several medical prediction tasks and demonstrate how model performance varies with patient characteristics. While ML models may demonstrate good overall performance, we argue that subgroup-level evaluation is essential before integrating them into clinical workflows. A performance analysis at the subgroup level makes such differences clearly identifiable, allowing, on the one hand, for performance disparities to be considered in clinical practice, and on the other hand, for these insights to inform the responsible development of more effective models. In this way, our work contributes to a practical discussion around the subgroup-sensitive development and deployment of medical ML models and the interconnectedness of fairness and transparency.
2024
XAI for Better Exploitation of Text in Medical Decision Support
Ajay Madhavan Ravichandran | Julianna Grune | Nils Feldhus | Aljoscha Burchardt | Roland Roller | Sebastian Möller
Proceedings of the 23rd Workshop on Biomedical Natural Language Processing
In electronic health records, text data is considered a valuable resource, as it complements the medical history and may contain information that cannot be easily captured in tables. But why does the inclusion of clinical texts as additional input into multimodal models not always significantly improve the performance of medical decision-support systems? Explainable AI (XAI) might provide the answer. We examine which information in text and structured data influences the performance of models in the context of multimodal decision support for biomedical tasks. Using data from an intensive care unit and targeting a mortality prediction task, we compare information that has been considered relevant by XAI methods to the opinion of a physician.