Tackling Hallucinations in Neural Chart Summarization

Saad Obaid ul Islam, Iza Škrjanec, Ondřej Dušek, Vera Demberg


Abstract
Hallucinations in text generation occur when a system produces text that is not grounded in its input. In this work, we tackle the problem of hallucinations in neural chart summarization. Our analysis shows that the target side of chart summarization training datasets often contains information absent from the input charts, which leads models to hallucinate. We propose a natural language inference (NLI) based method to preprocess the training data and show through human evaluation that our method significantly reduces hallucinations. We also find that shortening long-distance dependencies in the input sequence and adding chart-related information, such as titles and legends, improves overall performance.
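The NLI-based cleaning described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `toy_entails` is a hypothetical stand-in for a real NLI model (e.g., a transformer fine-tuned on an NLI corpus), using simple content-word overlap so the example stays self-contained, and the linearized chart format shown is likewise an assumption.

```python
# Sketch of NLI-based training-data cleaning: drop target-side summary
# sentences that are not entailed by the (linearized) chart input.
import re


def toy_entails(premise: str, hypothesis: str) -> bool:
    """Hypothetical stand-in for an NLI entailment check (premise => hypothesis).

    A real pipeline would score each (chart, sentence) pair with an NLI
    model; here we just check what fraction of the sentence's content
    words are grounded in the linearized chart.
    """
    premise_tokens = set(re.findall(r"[a-z0-9]+", premise.lower()))
    content = [t for t in re.findall(r"[a-z0-9]+", hypothesis.lower())
               if len(t) > 2 or t.isdigit()]
    if not content:
        return False
    grounded = sum(t in premise_tokens for t in content)
    return grounded / len(content) >= 0.5


def clean_summary(chart_input: str, summary: str) -> str:
    """Keep only the summary sentences entailed by the chart input."""
    sentences = re.split(r"(?<=[.!?])\s+", summary.strip())
    kept = [s for s in sentences if toy_entails(chart_input, s)]
    return " ".join(kept)


# Assumed linearization: title and (label, value) pairs joined with "|".
chart = "title: Smartphone sales 2020 | Apple 200 | Samsung 250 | units millions"
summary = ("Samsung sold 250 million units and Apple sold 200 million. "
           "The iPhone 12 was released in October and boosted holiday revenue.")
print(clean_summary(chart, summary))
# → Samsung sold 250 million units and Apple sold 200 million.
```

The second sentence, which introduces extra world knowledge not present in the chart, is filtered out before training; swapping `toy_entails` for a genuine NLI model would give the preprocessing the abstract describes.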
Anthology ID:
2023.inlg-main.30
Volume:
Proceedings of the 16th International Natural Language Generation Conference
Month:
September
Year:
2023
Address:
Prague, Czechia
Editors:
C. Maria Keet, Hung-Yi Lee, Sina Zarrieß
Venues:
INLG | SIGDIAL
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
414–423
URL:
https://aclanthology.org/2023.inlg-main.30
DOI:
10.18653/v1/2023.inlg-main.30
Cite (ACL):
Saad Obaid ul Islam, Iza Škrjanec, Ondřej Dušek, and Vera Demberg. 2023. Tackling Hallucinations in Neural Chart Summarization. In Proceedings of the 16th International Natural Language Generation Conference, pages 414–423, Prague, Czechia. Association for Computational Linguistics.
Cite (Informal):
Tackling Hallucinations in Neural Chart Summarization (Obaid ul Islam et al., INLG-SIGDIAL 2023)
PDF:
https://aclanthology.org/2023.inlg-main.30.pdf