Can Multiple Responses from an LLM Reveal the Sources of Its Uncertainty?

Yang Nan, Pengfei He, Ravi Tandon, Han Xu


Abstract
Large language models (LLMs) have delivered significant breakthroughs across diverse domains but can still produce unreliable or misleading outputs, posing critical challenges for real-world applications. While many recent studies focus on quantifying model uncertainty, relatively little work has been devoted to diagnosing the source of that uncertainty. In this study, we show that, when an LLM is uncertain, the patterns of disagreement among its multiple generated responses contain rich clues about the underlying cause of uncertainty. To illustrate this point, we collect multiple responses from a target LLM and employ an auxiliary LLM to analyze their patterns of disagreement. The auxiliary model is tasked with reasoning about the likely source of uncertainty, such as whether it stems from ambiguity in the input question, a lack of relevant knowledge, or both. In cases involving knowledge gaps, the auxiliary model also identifies the specific missing facts or concepts contributing to the uncertainty. In our experiments, we validate our framework on AmbigQA, OpenBookQA, and MMLU-Pro, confirming its generality in diagnosing distinct uncertainty sources. Such diagnoses point to targeted manual interventions that can improve LLM performance and reliability.
Anthology ID:
2025.findings-emnlp.841
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15551–15569
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.841/
DOI:
10.18653/v1/2025.findings-emnlp.841
Cite (ACL):
Yang Nan, Pengfei He, Ravi Tandon, and Han Xu. 2025. Can Multiple Responses from an LLM Reveal the Sources of Its Uncertainty?. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 15551–15569, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Can Multiple Responses from an LLM Reveal the Sources of Its Uncertainty? (Nan et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.841.pdf
Checklist:
2025.findings-emnlp.841.checklist.pdf