LLMs as Medical Safety Judges: Evaluating Alignment with Human Annotation in Patient-Facing QA
Yella Diekmann, Chase Fensore, Rodrigo Carrillo-Larco, Eduard Castejon Rosales, Sakshi Shiromani, Rima Pai, Megha Shah, Joyce Ho
Abstract
The increasing deployment of LLMs in patient-facing medical QA raises concerns about the reliability and safety of their responses. Traditional evaluation methods rely on expert human annotation, which is costly, time-consuming, and difficult to scale. This study explores the feasibility of using LLMs as automated judges for medical QA evaluation. We benchmark LLMs against human annotators across eight qualitative safety metrics and introduce adversarial question augmentation to assess LLMs’ robustness in evaluating medical responses. Our findings reveal that while LLMs achieve high accuracy in objective metrics such as scientific consensus and grammaticality, they struggle with more subjective categories like empathy and extent of harm. This work contributes to the ongoing discussion on automating safety assessments in medical AI and informs the development of more reliable evaluation methodologies.
- Anthology ID:
- 2025.bionlp-1.19
- Volume:
- ACL 2025
- Month:
- August
- Year:
- 2025
- Address:
- Vienna, Austria
- Editors:
- Dina Demner-Fushman, Sophia Ananiadou, Makoto Miwa, Junichi Tsujii
- Venues:
- BioNLP | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 217–224
- URL:
- https://preview.aclanthology.org/acl25-workshop-ingestion/2025.bionlp-1.19/
- Cite (ACL):
- Yella Diekmann, Chase Fensore, Rodrigo Carrillo-Larco, Eduard Castejon Rosales, Sakshi Shiromani, Rima Pai, Megha Shah, and Joyce Ho. 2025. LLMs as Medical Safety Judges: Evaluating Alignment with Human Annotation in Patient-Facing QA. In ACL 2025, pages 217–224, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal):
- LLMs as Medical Safety Judges: Evaluating Alignment with Human Annotation in Patient-Facing QA (Diekmann et al., BioNLP 2025)
- PDF:
- https://preview.aclanthology.org/acl25-workshop-ingestion/2025.bionlp-1.19.pdf
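The abstract's core methodology, comparing LLM judge labels against human annotations per safety metric, is typically quantified with an agreement statistic such as Cohen's kappa. The sketch below is illustrative only (the paper's actual code, metrics, and label sets are not given here; the label values are hypothetical):

```python
# Illustrative sketch (not the paper's code): agreement between an LLM
# judge and a human annotator on one qualitative safety metric.
from collections import Counter


def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' labels on the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each annotator labeled independently at
    # their own marginal rates.
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)


# Hypothetical labels for a single metric (e.g. "scientific consensus"):
human = ["safe", "safe", "unsafe", "safe", "unsafe", "safe"]
llm = ["safe", "safe", "unsafe", "unsafe", "unsafe", "safe"]
print(round(cohen_kappa(human, llm), 3))  # → 0.667
```

Chance-corrected agreement like this is what separates genuinely aligned judgments from the high raw accuracy an LLM can get on skewed categories, which is relevant to the paper's observation that objective metrics score well while subjective ones (empathy, extent of harm) lag.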