On A Scale From 1 to 5: Quantifying Hallucination in Faithfulness Evaluation

Xiaonan Jing, Srinivas Billa, Danny Godbout


Abstract
Hallucination has been a popular topic in natural language generation (NLG). In real-world applications, unfaithful content can result in poor data quality or loss of trust from end users. Thus, it is crucial to fact-check before adopting NLG for production usage, which can be expensive if done manually. In this paper, we investigate automated faithfulness evaluation in guided NLG. We developed a rubric template and used large language models (LLMs) to score generations on quantifiable scales. We compared popular LLMs as well as widely adopted natural language inference (NLI) models on scoring quality and sensitivity. In addition, we developed methods for generating synthetic unfaithful data, as well as heuristics to quantify the percentage of hallucination. Our results on four travel-domain industry datasets show that GPT-4 can provide accurate judgement and explanation of whether a source and a generation are factually consistent. Furthermore, we found that tuning NLI models on synthetic data can improve performance. Lastly, we present insights on the latency and cost of deploying such a system.
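The paper itself specifies the rubric template and prompts; as a rough illustration only, the sketch below shows one way an LLM judge could be prompted to rate the faithfulness of a generation against its source on a 1-5 scale and have the numeric score parsed out. The rubric wording, the score_faithfulness helper, and the use of the OpenAI chat API are assumptions made for this sketch, not the authors' implementation.

```python
# Minimal sketch (not the paper's exact rubric): ask an LLM judge for a 1-5
# faithfulness score of a generation with respect to its source document.
import re
from openai import OpenAI  # assumes the openai Python package and an API key

RUBRIC_PROMPT = """You are a strict fact-checking judge.
Rate how faithful the GENERATION is to the SOURCE on a 1-5 scale:
5 = fully supported by the source
4 = minor unsupported details
3 = a mix of supported and unsupported content
2 = mostly unsupported
1 = contradicts the source or is entirely unsupported
Answer with the number first, then a one-sentence explanation.

SOURCE:
{source}

GENERATION:
{generation}
"""

def score_faithfulness(source: str, generation: str, model: str = "gpt-4") -> int:
    """Query an LLM judge and parse the leading 1-5 faithfulness score."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": RUBRIC_PROMPT.format(source=source, generation=generation)}],
        temperature=0,  # deterministic scoring
    )
    text = response.choices[0].message.content
    match = re.search(r"[1-5]", text)
    if match is None:
        raise ValueError(f"Could not parse a 1-5 score from: {text!r}")
    return int(match.group())
```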
Anthology ID: 2025.findings-naacl.433
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 7765–7780
URL: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.433/
Cite (ACL): Xiaonan Jing, Srinivas Billa, and Danny Godbout. 2025. On A Scale From 1 to 5: Quantifying Hallucination in Faithfulness Evaluation. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 7765–7780, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): On A Scale From 1 to 5: Quantifying Hallucination in Faithfulness Evaluation (Jing et al., Findings 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.433.pdf