DiaHalu: A Dialogue-level Hallucination Evaluation Benchmark for Large Language Models

Kedi Chen, Qin Chen, Jie Zhou, Yishen He, Liang He


Abstract
Though large language models (LLMs) have achieved remarkable success in recent years, hallucination remains a challenge, and numerous benchmarks have been proposed for hallucination detection. Nevertheless, some of these benchmarks are not naturally generated by LLMs but are intentionally induced. Moreover, many focus solely on factuality hallucination while ignoring faithfulness hallucination. Additionally, although dialogue is the prevailing interaction pattern in the era of LLMs, existing benchmarks concentrate only on sentence-level and passage-level hallucination. In this study, we propose DiaHalu, to our knowledge the first dedicated dialogue-level hallucination evaluation benchmark for LLMs. We first integrate the collected topics into system prompts and have two LLMs converse with each other. We then manually revise contents that do not adhere to human language conventions and have the LLMs re-generate, simulating authentic human-machine interaction scenarios. Finally, professional scholars annotate all samples in the dataset. DiaHalu covers four common multi-turn dialogue domains and five hallucination subtypes, extending factuality and faithfulness hallucination. Experiments with several well-known LLMs and detection methods show that DiaHalu is a challenging benchmark, holding significant value for further research.
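The abstract describes a two-LLM dialogue generation step: each model receives the collected topic in its system prompt, and the two take turns producing a multi-turn conversation. The sketch below is a minimal illustration of that step, not the authors' released code; query_llm is a placeholder for any chat-completion API, and the prompt wording, role assignment, and turn count are illustrative assumptions.

def query_llm(messages):
    """Placeholder: send a chat history to an LLM and return its reply text."""
    raise NotImplementedError("wire this to your chat-completion API of choice")

def generate_dialogue(topic, num_turns=4):
    # Both agents get the collected topic embedded in their system prompts.
    system_a = f"You are a curious user chatting about: {topic}"
    system_b = f"You are a helpful assistant discussing: {topic}"
    history = []  # shared transcript as (speaker, text) pairs
    for _ in range(num_turns):
        # Agent A speaks, seeing the transcript from its own perspective:
        # its own past turns are "assistant" messages, B's are "user" messages.
        msgs_a = [{"role": "system", "content": system_a}] + [
            {"role": "assistant" if s == "A" else "user", "content": t}
            for s, t in history
        ]
        history.append(("A", query_llm(msgs_a)))
        # Agent B responds with the roles mirrored.
        msgs_b = [{"role": "system", "content": system_b}] + [
            {"role": "assistant" if s == "B" else "user", "content": t}
            for s, t in history
        ]
        history.append(("B", query_llm(msgs_b)))
    return history

Turns that violate human language conventions would then be manually revised and re-generated, per the pipeline in the abstract.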
Anthology ID:
2024.findings-emnlp.529
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9057–9079
URL:
https://preview.aclanthology.org/fix-sig-urls/2024.findings-emnlp.529/
DOI:
10.18653/v1/2024.findings-emnlp.529
Cite (ACL):
Kedi Chen, Qin Chen, Jie Zhou, Yishen He, and Liang He. 2024. DiaHalu: A Dialogue-level Hallucination Evaluation Benchmark for Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 9057–9079, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
DiaHalu: A Dialogue-level Hallucination Evaluation Benchmark for Large Language Models (Chen et al., Findings 2024)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2024.findings-emnlp.529.pdf
Data:
2024.findings-emnlp.529.data.zip