Improve Decoding Factuality by Token-wise Cross Layer Entropy of Large Language Models

Jialiang Wu, Yi Shen, Sijia Liu, Yi Tang, Sen Song, Xiaoyi Wang, Longjun Cai


Abstract
Despite their impressive capabilities, large language models (LLMs) often struggle with hallucination, generating inaccurate or fabricated content even when they possess the correct knowledge. In this paper, we extend the exploration of the correlation between hidden-state prediction changes and output factuality to a deeper, token-wise level. Based on these insights, we propose cross-layer Entropy eNhanced Decoding (END), a decoding method that mitigates hallucinations without requiring extra training. END leverages inner probability changes across layers to individually quantify the factual knowledge required for each candidate token, and adjusts the final prediction distribution to prioritize tokens with higher factuality. Experiments on both hallucination and QA benchmarks demonstrate that END significantly enhances the truthfulness and informativeness of generation while maintaining robust QA accuracy. Moreover, our work provides a deeper perspective on the correlation between inherent knowledge and output factuality.
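The abstract describes adjusting the final token distribution using probability changes across layers. The paper's exact formulation is not given here, so the sketch below is only an illustrative assumption of the general idea: for each candidate token, compute the entropy of its (normalized) probability trajectory across intermediate layers, then down-weight tokens whose probability mass is spread evenly across layers. The function names, the normalization scheme, and the `alpha` weighting parameter are all hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_layer_entropy(layer_logits):
    """For each vocabulary token, the entropy of its probability
    trajectory across layers (normalized per token).
    layer_logits: array of shape (num_layers, vocab_size),
    e.g. from applying the output head to each layer's hidden state."""
    probs = softmax(layer_logits, axis=-1)            # (L, V)
    traj = probs / probs.sum(axis=0, keepdims=True)   # per-token distribution over layers
    return -(traj * np.log(traj + 1e-12)).sum(axis=0) # (V,)

def adjusted_distribution(layer_logits, alpha=1.0):
    """Hypothetical adjustment: reweight the last layer's distribution
    so tokens with lower cross-layer entropy (more layer-localized
    probability mass) are prioritized. `alpha` is an assumed knob."""
    final_probs = softmax(layer_logits[-1])
    ent = cross_layer_entropy(layer_logits)
    adjusted = final_probs * np.exp(-alpha * ent)
    return adjusted / adjusted.sum()
```

For example, with `layer_logits` stacked from a model's intermediate hidden states projected through the unembedding matrix, `adjusted_distribution` returns a valid probability vector over the vocabulary that can replace the standard softmax output during greedy or sampled decoding.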
Anthology ID:
2025.findings-naacl.217
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3912–3921
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.217/
Cite (ACL):
Jialiang Wu, Yi Shen, Sijia Liu, Yi Tang, Sen Song, Xiaoyi Wang, and Longjun Cai. 2025. Improve Decoding Factuality by Token-wise Cross Layer Entropy of Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3912–3921, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Improve Decoding Factuality by Token-wise Cross Layer Entropy of Large Language Models (Wu et al., Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.217.pdf