Decoding Uncertainty: The Impact of Decoding Strategies for Uncertainty Estimation in Large Language Models

Wataru Hashimoto, Hidetaka Kamigaito, Taro Watanabe


Abstract
Decoding strategies manipulate the probability distribution underlying the output of a language model and can therefore affect both generation quality and the uncertainty of the generated output. In this study, we investigate the impact of decoding strategies on uncertainty estimation in Large Language Models (LLMs). Our experiments show that Contrastive Search, which mitigates repetition, yields better uncertainty estimates on average across a range of preference-aligned LLMs. In contrast, the benefits of such strategies sometimes diverge when the model is only post-trained with supervised fine-tuning, i.e., without explicit alignment.
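As a rough illustration of the setting described in the abstract, the minimal sketch below pairs Contrastive Search decoding (enabled in Hugging Face transformers by setting penalty_alpha together with top_k) with a simple length-normalized log-probability confidence score. The model name, prompt, and hyperparameters are placeholders and not the paper's experimental setup; the score is a generic confidence proxy, not the paper's specific uncertainty estimator.

```python
# Minimal sketch (illustrative, not the paper's setup): Contrastive Search
# decoding plus a length-normalized log-probability confidence score.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper evaluates preference-aligned LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Question: What is the capital of France? Answer:"
inputs = tokenizer(prompt, return_tensors="pt")

# Contrastive Search is triggered in `generate` by penalty_alpha > 0 with top_k;
# 0.6 and 4 are commonly used values, not the paper's tuned settings.
outputs = model.generate(
    **inputs,
    penalty_alpha=0.6,
    top_k=4,
    max_new_tokens=32,
    return_dict_in_generate=True,
    output_scores=True,
)

# Average per-token log-probability of the continuation as a simple
# confidence proxy (higher = more confident about the generated answer).
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=True
)
confidence = transition_scores.mean().item()
print(tokenizer.decode(outputs.sequences[0], skip_special_tokens=True))
print(f"Length-normalized log-prob: {confidence:.3f}")
```

Swapping the generation arguments (e.g., do_sample=True with temperature, or num_beams for beam search) while keeping the scoring step fixed is one straightforward way to probe how the decoding strategy alone shifts such confidence scores.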
Anthology ID:
2025.findings-emnlp.788
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14601–14613
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.788/
DOI:
10.18653/v1/2025.findings-emnlp.788
Cite (ACL):
Wataru Hashimoto, Hidetaka Kamigaito, and Taro Watanabe. 2025. Decoding Uncertainty: The Impact of Decoding Strategies for Uncertainty Estimation in Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 14601–14613, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Decoding Uncertainty: The Impact of Decoding Strategies for Uncertainty Estimation in Large Language Models (Hashimoto et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.788.pdf
Checklist:
2025.findings-emnlp.788.checklist.pdf