Correcting Hallucinations in News Summaries: Exploration of Self-Correcting LLM Methods with External Knowledge

Juraj Vladika, Ihsan Soydemir, Florian Matthes


Abstract
While large language models (LLMs) have shown remarkable capabilities in generating coherent text, they suffer from the issue of hallucinations – factually inaccurate statements. Among the numerous approaches to tackling hallucinations, self-correcting methods are especially promising. They leverage the multi-turn nature of LLMs to iteratively generate verification questions that ask for additional evidence, answer them with internal or external knowledge, and use the answers to refine the original response with corrections. These methods have been explored for encyclopedic text generation, but less so for domains such as news summarization. In this work, we investigate two state-of-the-art self-correcting systems by applying them to correct hallucinated summaries, using evidence from three search engines. We analyze the results and provide insights into the systems’ performance, revealing practical findings on the benefits of search-engine snippets and few-shot prompts, as well as the high alignment between G-Eval and human evaluation.
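
To make the verify-and-refine loop described in the abstract concrete, below is a minimal Python sketch of such a pipeline. It is not the authors' implementation: the helpers `ask_llm` and `search_engine` are hypothetical placeholders standing in for an arbitrary LLM client and a snippet-returning search-engine API, and the prompts are illustrative only.

```python
# Minimal sketch of an iterative verify-and-refine loop in the spirit of the
# self-correcting methods described in the abstract. `ask_llm` and
# `search_engine` are hypothetical placeholders, not the paper's system.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to any chat-style LLM API."""
    raise NotImplementedError("Plug in your LLM client here.")


def search_engine(query: str) -> str:
    """Placeholder for a search-engine lookup returning snippet text."""
    raise NotImplementedError("Plug in your search client here.")


def correct_summary(article: str, summary: str, rounds: int = 2) -> str:
    """Iteratively question, retrieve evidence, and refine a news summary."""
    for _ in range(rounds):
        # 1. Generate verification questions about claims in the summary.
        questions = ask_llm(
            f"List verification questions for the factual claims in:\n{summary}"
        ).splitlines()

        # 2. Answer each question with external evidence (search snippets).
        evidence = [search_engine(q) for q in questions if q.strip()]

        # 3. Refine the summary so it stays consistent with the evidence.
        summary = ask_llm(
            "Rewrite the summary so it is consistent with the evidence.\n"
            f"Article: {article}\nEvidence: {evidence}\nSummary: {summary}"
        )
    return summary
```

The loop structure (question generation, evidence retrieval, refinement) mirrors the high-level description in the abstract; the number of rounds, prompt wording, and evidence format would differ in the systems studied in the paper.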
Anthology ID:
2025.fever-1.9
Volume:
Proceedings of the Eighth Fact Extraction and VERification Workshop (FEVER)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Mubashara Akhtar, Rami Aly, Christos Christodoulopoulos, Oana Cocarascu, Zhijiang Guo, Arpit Mittal, Michael Schlichtkrull, James Thorne, Andreas Vlachos
Venues:
FEVER | WS
Publisher:
Association for Computational Linguistics
Pages:
118–131
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.fever-1.9/
Cite (ACL):
Juraj Vladika, Ihsan Soydemir, and Florian Matthes. 2025. Correcting Hallucinations in News Summaries: Exploration of Self-Correcting LLM Methods with External Knowledge. In Proceedings of the Eighth Fact Extraction and VERification Workshop (FEVER), pages 118–131, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Correcting Hallucinations in News Summaries: Exploration of Self-Correcting LLM Methods with External Knowledge (Vladika et al., FEVER 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.fever-1.9.pdf