The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination
Yuji Zhang, Sha Li, Cheng Qian, Jiateng Liu, Pengfei Yu, Chi Han, Yi R. Fung, Kathleen McKeown, ChengXiang Zhai, Manling Li, Heng Ji
Abstract
Hallucination is a persistent challenge in large language models (LLMs), where even with rigorous quality control, models often generate distorted facts. This paradox, in which error generation continues despite high-quality training data, calls for a deeper understanding of the underlying LLM mechanisms. To address it, we propose a novel concept: knowledge overshadowing, where a model's dominant knowledge can obscure less prominent knowledge during text generation, causing the model to fabricate inaccurate details. Building on this idea, we introduce a novel framework to quantify factual hallucinations by modeling knowledge overshadowing. Central to our approach is the log-linear law, which predicts that the rate of factual hallucination increases linearly with the logarithmic scale of (1) Knowledge Popularity, (2) Knowledge Length, and (3) Model Size. The law provides a means to preemptively quantify hallucinations, offering foresight into their occurrence even before model training or inference. Building on the overshadowing effect, we propose a new decoding strategy, CoDA, to mitigate hallucinations; it notably enhances model factuality on Overshadow (27.9%), MemoTrap (13.1%), and NQ-Swap (18.3%). Our findings not only deepen understanding of the underlying mechanisms behind hallucinations but also provide actionable insights for developing more predictable and controllable language models.
- Anthology ID:
- 2025.fever-1.10
- Volume:
- Proceedings of the Eighth Fact Extraction and VERification Workshop (FEVER)
- Month:
- July
- Year:
- 2025
- Address:
- Vienna, Austria
- Editors:
- Mubashara Akhtar, Rami Aly, Christos Christodoulopoulos, Oana Cocarascu, Zhijiang Guo, Arpit Mittal, Michael Schlichtkrull, James Thorne, Andreas Vlachos
- Venues:
- FEVER | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 132–150
- URL:
- https://preview.aclanthology.org/acl25-workshop-ingestion/2025.fever-1.10/
- Cite (ACL):
- Yuji Zhang, Sha Li, Cheng Qian, Jiateng Liu, Pengfei Yu, Chi Han, Yi R. Fung, Kathleen McKeown, ChengXiang Zhai, Manling Li, and Heng Ji. 2025. The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination. In Proceedings of the Eighth Fact Extraction and VERification Workshop (FEVER), pages 132–150, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal):
- The Law of Knowledge Overshadowing: Towards Understanding, Predicting, and Preventing LLM Hallucination (Zhang et al., FEVER 2025)
- PDF:
- https://preview.aclanthology.org/acl25-workshop-ingestion/2025.fever-1.10.pdf
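The log-linear law in the abstract states that hallucination rate varies linearly in the logarithms of knowledge popularity, knowledge length, and model size. The sketch below is only an illustration of fitting such a relationship by ordinary least squares; all data and coefficient values are synthetic placeholders, not results from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Hypothetical covariates: popularity (occurrence counts), knowledge
# length (tokens), and model size (parameters). Ranges are made up.
popularity = rng.uniform(1e2, 1e6, size=n)
length = rng.uniform(5, 200, size=n)
model_size = rng.uniform(1e8, 1e11, size=n)

# Design matrix of log-scaled features, per the log-linear form.
X = np.column_stack([np.log(popularity), np.log(length), np.log(model_size)])

# Generate a synthetic "hallucination rate" from assumed coefficients
# plus small noise, then recover them with least squares.
true_coefs = np.array([0.03, 0.05, -0.02])
intercept = 0.1
y = X @ true_coefs + intercept + rng.normal(0, 0.01, size=n)

A = np.column_stack([X, np.ones(n)])          # append intercept column
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coefs)  # approximately [0.03, 0.05, -0.02, 0.1] on this synthetic data
```

The point of the sketch is only the functional form: a linear model over log-transformed covariates, which is what makes the predicted hallucination rate estimable before training or inference, given the three quantities.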