@inproceedings{vladika-etal-2025-correcting,
    title = "Correcting Hallucinations in News Summaries: Exploration of Self-Correcting {LLM} Methods with External Knowledge",
    author = "Vladika, Juraj  and
      Soydemir, Ihsan  and
      Matthes, Florian",
    editor = "Akhtar, Mubashara  and
      Aly, Rami  and
      Christodoulopoulos, Christos  and
      Cocarascu, Oana  and
      Guo, Zhijiang  and
      Mittal, Arpit  and
      Schlichtkrull, Michael  and
      Thorne, James  and
      Vlachos, Andreas",
    booktitle = "Proceedings of the Eighth Fact Extraction and VERification Workshop (FEVER)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.fever-1.9/",
    doi = "10.18653/v1/2025.fever-1.9",
    pages = "118--131",
    ISBN = "978-1-959429-53-1",
    abstract = "While large language models (LLMs) have shown remarkable capabilities to generate coherent text, they suffer from the issue of hallucinations {--} factually inaccurate statements. Among numerous approaches to tackle hallucinations, especially promising are the self-correcting methods. They leverage the multi-turn nature of LLMs to iteratively generate verification questions inquiring additional evidence, answer them with internal or external knowledge, and use that to refine the original response with the new corrections. These methods have been explored for encyclopedic generation, but less so for domains like news summaries. In this work, we investigate two state-of-the-art self-correcting systems by applying them to correct hallucinated summaries, using evidence from three search engines. We analyze the results and provide insights into systems' performance, revealing interesting practical findings on the benefits of search engine snippets and few-shot prompts, as well as high alignment of G-Eval and human evaluation."
}