FACTCHECKMATE: Preemptively Detecting and Mitigating Hallucinations in LMs

Deema Alnuhait, Neeraja Kirtane, Muhammad Khalifa, Hao Peng


Abstract
Language models (LMs) hallucinate. We ask: can we detect and mitigate hallucinations before they happen? This work answers this question in the affirmative, showing that the internal representations of LMs provide rich signals for this purpose. We introduce FactCheckmate, which preemptively detects hallucinations by learning a classifier that predicts, from the hidden states the LM produces over the input before decoding begins, whether the model will hallucinate. If a hallucination is detected, FactCheckmate intervenes by adjusting the LM's hidden states so that the model produces more factual output. FactCheckmate offers fresh insight into how the inner workings of LMs are revealed by their hidden states. Practically, both its detection and mitigation models are lightweight, adding little inference overhead, which makes FactCheckmate a more efficient approach to mitigating hallucinations than many post-hoc alternatives. We evaluate FactCheckmate over LMs of different scales and model families (including Llama, Mistral, Qwen, and Gemma), across a variety of QA datasets from different domains. Our results demonstrate the effectiveness of FactCheckmate: it achieves over 70% preemptive detection accuracy, and on average, outputs generated by LMs with intervention are 34.4% more factual than those without.
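To make the detect-then-intervene pipeline concrete, the following is a minimal PyTorch sketch. The probe architecture (a small MLP over mean-pooled input hidden states), the choice of the last layer, and the form of the intervention (a learned additive edit) are illustrative assumptions, not the paper's exact components; lm is assumed to be a Hugging Face-style causal LM that can return hidden states.

    # Minimal sketch of preemptive detection + intervention, assuming a
    # Hugging Face-style causal LM. The probe/intervention designs here are
    # illustrative assumptions, not FactCheckmate's exact architectures.
    import torch
    import torch.nn as nn


    class HallucinationProbe(nn.Module):
        """Binary classifier over the hidden states produced for the input."""

        def __init__(self, hidden_dim: int):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(hidden_dim, 256),
                nn.ReLU(),
                nn.Linear(256, 1),
            )

        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            # hidden_states: (batch, seq_len, hidden_dim); mean-pool the input.
            pooled = hidden_states.mean(dim=1)
            return torch.sigmoid(self.mlp(pooled)).squeeze(-1)


    class Intervention(nn.Module):
        """Learned additive edit that nudges hidden states toward factuality."""

        def __init__(self, hidden_dim: int):
            super().__init__()
            self.delta = nn.Linear(hidden_dim, hidden_dim)

        def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
            return hidden_states + self.delta(hidden_states)


    def detect_and_intervene(lm, probe, intervention, input_ids, threshold=0.5):
        """Run the LM over the input only, predict hallucination, edit if needed."""
        with torch.no_grad():
            out = lm(input_ids, output_hidden_states=True)
        h = out.hidden_states[-1]  # last layer; the layer choice is an assumption
        p_hallucinate = probe(h)   # (batch,) probability, before decoding begins
        if p_hallucinate.item() > threshold:  # assumes batch size 1
            h = intervention(h)
        # Decoding would then continue from the (possibly edited) hidden states.
        return h, p_hallucinate

In practice, the probe would be trained on hidden states labeled by whether the LM's eventual output was factual, and the edited states would be fed back into decoding; both components are small relative to the LM itself, which is what keeps the inference overhead low.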
Anthology ID:
2025.findings-emnlp.663
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12413–12428
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.663/
DOI:
10.18653/v1/2025.findings-emnlp.663
Cite (ACL):
Deema Alnuhait, Neeraja Kirtane, Muhammad Khalifa, and Hao Peng. 2025. FACTCHECKMATE: Preemptively Detecting and Mitigating Hallucinations in LMs. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 12413–12428, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
FACTCHECKMATE: Preemptively Detecting and Mitigating Hallucinations in LMs (Alnuhait et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.663.pdf
Checklist:
2025.findings-emnlp.663.checklist.pdf