Uncertainty Quantification for Clinical Outcome Predictions with (Large) Language Models

Zizhang Chen, Peizhao Li, Xiaomeng Dong, Pengyu Hong


Abstract
Language models (LMs) have significant potential to facilitate healthcare delivery through clinical prediction tasks on electronic health records (EHRs). However, in these high-stakes applications, unreliable decisions can incur significant costs through compromised patient safety and ethical concerns, increasing the need for sound uncertainty modelling of automated clinical predictions. To address this, we study uncertainty quantification of LMs for EHR tasks in both white-box and black-box settings. We first quantify uncertainty in white-box models, where we have access to model parameters and output logits, and show that model uncertainty can be effectively reduced with the proposed multi-tasking and ensemble methods on EHRs. Building on this idea, we extend our approach to black-box settings, including popular proprietary LMs such as GPT-4. We validate our framework on longitudinal clinical data from more than 6,000 patients across ten clinical prediction tasks. Results show that ensembling and multi-task prediction prompts reduce uncertainty across different scenarios. These findings improve model transparency in both white-box and black-box settings, thereby advancing reliable AI in healthcare.
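To make the white-box setting concrete, the sketch below illustrates one standard way to score uncertainty from an ensemble of models with accessible logits: average each member's class probabilities, then take the entropy of the mean distribution. This is a minimal illustration of ensemble-based uncertainty quantification in general, not the paper's exact method; the logit values and the two-class clinical outcome are hypothetical.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ensemble_predictive_entropy(member_logits):
    """Average class probabilities across ensemble members, then score
    uncertainty as the entropy of the mean distribution (in nats)."""
    probs = [softmax(l) for l in member_logits]
    n, k = len(probs), len(probs[0])
    mean = [sum(p[j] for p in probs) / n for j in range(k)]
    entropy = -sum(p * math.log(p) for p in mean if p > 0)
    return mean, entropy

# Three hypothetical white-box members scoring one patient
# (logits for [no event, adverse event]).
agree = [[2.0, -1.0], [1.8, -0.9], [2.2, -1.2]]
disagree = [[2.0, -1.0], [-1.5, 1.0], [0.1, 0.0]]
_, h_agree = ensemble_predictive_entropy(agree)
_, h_disagree = ensemble_predictive_entropy(disagree)
```

When the members agree, the mean distribution is peaked and the entropy is low; when they disagree, the mean flattens and the entropy approaches its maximum (log 2 for a binary outcome), flagging the prediction as unreliable.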
Anthology ID:
2025.findings-naacl.419
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7512–7523
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.419/
Cite (ACL):
Zizhang Chen, Peizhao Li, Xiaomeng Dong, and Pengyu Hong. 2025. Uncertainty Quantification for Clinical Outcome Predictions with (Large) Language Models. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 7512–7523, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Uncertainty Quantification for Clinical Outcome Predictions with (Large) Language Models (Chen et al., Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.419.pdf