From Confidence to Collapse in LLM Factual Robustness

Alina Fastowski, Bardh Prenkaj, Gjergji Kasneci


Abstract
Ensuring the robustness of factual knowledge in LLMs is critical for reliable applications in tasks such as question answering and reasoning. However, existing evaluation methods predominantly focus on performance-based metrics and typically probe robustness through prompt perturbations, which captures only the externally triggered side of knowledge robustness. To bridge this gap, we introduce a principled approach that measures factual robustness from the perspective of the generation process itself, analyzing token distribution entropy in combination with sensitivity to temperature scaling. These two factors form the Factual Robustness Score (FRS), a novel metric that quantifies the stability of a fact against perturbations in decoding conditions, given its initial uncertainty. To validate our approach, we conduct extensive experiments on 5 LLMs across 3 closed-book QA datasets (SQuAD, TriviaQA, and HotpotQA). We show that factual robustness varies significantly: smaller models report an FRS of 0.76, larger ones 0.93, with accuracy degrading by ~60% under increased uncertainty. These insights demonstrate how entropy and temperature scaling affect factual accuracy and lay a foundation for developing more robust knowledge retention and retrieval in future models. We release our code at https://github.com/afastowski/frs.
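
The exact FRS definition is given in the paper itself; the sketch below is only a rough illustration of the two ingredients named in the abstract, token-distribution entropy and temperature-scaling sensitivity. It assumes a Hugging Face causal LM; the model name, prompt, gold answer, and the final `frs_proxy` combination are placeholders for illustration, not the paper's formula.

```python
# Illustrative sketch only: the paper defines the actual FRS; here a hypothetical
# combination of answer-token entropy and temperature-scaled accuracy is shown.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder model; the paper evaluates 5 different LLMs
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def answer_token_entropy(prompt: str) -> float:
    """Shannon entropy of the next-token distribution at the answer position."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the first answer token
    probs = torch.softmax(logits, dim=-1)
    return float(-(probs * torch.log(probs + 1e-12)).sum())

def accuracy_under_temperature(prompt: str, gold: str, temps=(0.5, 1.0, 1.5), n=5) -> dict:
    """Fraction of sampled answers containing the gold string at each temperature."""
    inputs = tokenizer(prompt, return_tensors="pt")
    results = {}
    for t in temps:
        hits = 0
        for _ in range(n):
            out = model.generate(
                **inputs, do_sample=True, temperature=t,
                max_new_tokens=10, pad_token_id=tokenizer.eos_token_id,
            )
            # decode only the newly generated tokens
            text = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:],
                                    skip_special_tokens=True)
            hits += int(gold.lower() in text.lower())
        results[t] = hits / n
    return results

prompt = "Q: What is the capital of France? A:"
entropy = answer_token_entropy(prompt)
acc_by_temp = accuracy_under_temperature(prompt, gold="Paris")
# Hypothetical robustness proxy: accuracy retained at high temperature,
# discounted by the model's initial uncertainty (entropy).
frs_proxy = acc_by_temp[1.5] / (1.0 + entropy)
print(f"entropy={entropy:.3f}, accuracy by temperature={acc_by_temp}, proxy={frs_proxy:.3f}")
```

The design choice mirrors the abstract's framing: a fact counts as robust when the model is both initially confident (low entropy) and keeps producing the correct answer as decoding conditions are perturbed (high accuracy at elevated temperature).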
Anthology ID: 2025.findings-emnlp.460
Volume: Findings of the Association for Computational Linguistics: EMNLP 2025
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 8650–8667
URL: https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.460/
DOI: 10.18653/v1/2025.findings-emnlp.460
Cite (ACL): Alina Fastowski, Bardh Prenkaj, and Gjergji Kasneci. 2025. From Confidence to Collapse in LLM Factual Robustness. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 8650–8667, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): From Confidence to Collapse in LLM Factual Robustness (Fastowski et al., Findings 2025)
PDF: https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.460.pdf
Checklist: 2025.findings-emnlp.460.checklist.pdf