A Survey of Uncertainty Estimation Methods on Large Language Models

Zhiqiu Xia, Jinxuan Xu, Yuqian Zhang, Hang Liu

Abstract
Large language models (LLMs) have demonstrated remarkable capabilities across various tasks. However, these models can produce biased, hallucinated, or non-factual responses that are camouflaged by their fluency and realistic appearance. Uncertainty estimation is a key method for addressing this challenge. While research efforts in uncertainty estimation are ramping up, comprehensive and dedicated surveys on LLM uncertainty estimation are still lacking. This survey presents four major avenues of LLM uncertainty estimation. Furthermore, we perform extensive experimental evaluations across multiple methods and datasets. Finally, we outline critical and promising future directions for LLM uncertainty estimation.
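
As a rough illustration of what uncertainty estimation means in practice (the survey's own taxonomy and methods are in the PDF linked below), the following sketch computes two commonly used confidence signals from a Hugging Face causal language model: the length-normalized log-probability of the generated tokens and the mean predictive entropy over generation steps. The choice of model ("gpt2"), prompt, and decoding settings are illustrative assumptions, not methods taken from the paper.

# Minimal sketch (illustrative, not from the paper): token-probability-based
# uncertainty signals for a generated answer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM exposing generation scores works similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=8,
        do_sample=False,
        return_dict_in_generate=True,
        output_scores=True,
    )

# Log-probabilities of the tokens that were actually generated, one per step.
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=True
)

# Length-normalized sequence log-probability: closer to 0 suggests higher confidence.
avg_logprob = transition_scores.mean().item()

# Mean predictive entropy over generation steps: larger values suggest higher uncertainty.
step_entropies = []
for step_logits in outputs.scores:
    log_probs = torch.log_softmax(step_logits, dim=-1)
    step_entropies.append(-(log_probs.exp() * log_probs).sum(dim=-1))
mean_entropy = torch.stack(step_entropies).mean().item()

print(f"avg token log-prob: {avg_logprob:.3f}, mean entropy: {mean_entropy:.3f} nats")

These logit-based scores are only one family of approaches; the survey also covers others (for example, sampling-based consistency and verbalized confidence), which this sketch does not attempt to reproduce.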
Anthology ID: 2025.findings-acl.1101
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 21381–21396
URL: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.1101/
Cite (ACL): Zhiqiu Xia, Jinxuan Xu, Yuqian Zhang, and Hang Liu. 2025. A Survey of Uncertainty Estimation Methods on Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 21381–21396, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): A Survey of Uncertainty Estimation Methods on Large Language Models (Xia et al., Findings 2025)
PDF: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.1101.pdf