MALTO at SemEval-2024 Task 6: Leveraging Synthetic Data for LLM Hallucination Detection

Federico Borra, Claudio Savelli, Giacomo Rosso, Alkis Koudounas, Flavio Giobergia


Abstract
In Natural Language Generation (NLG), contemporary Large Language Models (LLMs) face several challenges, such as generating fluent yet inaccurate outputs and a reliance on fluency-centric metrics. This often leads to neural networks exhibiting "hallucinations." The SHROOM challenge focuses on automatically identifying these hallucinations in generated text. To tackle these issues, we introduce two key components: a data augmentation pipeline incorporating LLM-assisted pseudo-labelling and sentence rephrasing, and a voting ensemble of three models pre-trained on Natural Language Inference (NLI) tasks and fine-tuned on diverse datasets.
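The voting ensemble described in the abstract can be illustrated with a minimal sketch. The `model_outputs` labels and the majority-vote tie-breaking are assumptions for illustration, not the authors' exact implementation; the label names follow the SHROOM task's "Hallucination" / "Not Hallucination" convention.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label predicted by most ensemble members.

    Ties are broken by first-seen order (Counter preserves
    insertion order in Python 3.7+).
    """
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-model labels for one generated sentence,
# e.g. from three NLI-pretrained classifiers.
model_outputs = ["Hallucination", "Not Hallucination", "Hallucination"]
print(majority_vote(model_outputs))  # -> Hallucination
```

With an odd number of voters, as here, a strict majority always exists for a binary label set, which is one practical reason to ensemble three models rather than two.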
Anthology ID:
2024.semeval-1.240
Volume:
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Atul Kr. Ojha, A. Seza Doğruöz, Harish Tayyar Madabushi, Giovanni Da San Martino, Sara Rosenthal, Aiala Rosá
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
1678–1684
URL:
https://aclanthology.org/2024.semeval-1.240
Cite (ACL):
Federico Borra, Claudio Savelli, Giacomo Rosso, Alkis Koudounas, and Flavio Giobergia. 2024. MALTO at SemEval-2024 Task 6: Leveraging Synthetic Data for LLM Hallucination Detection. In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 1678–1684, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
MALTO at SemEval-2024 Task 6: Leveraging Synthetic Data for LLM Hallucination Detection (Borra et al., SemEval 2024)
PDF:
https://preview.aclanthology.org/jeptaln-2024-ingestion/2024.semeval-1.240.pdf
Supplementary material:
2024.semeval-1.240.SupplementaryMaterial.zip
2024.semeval-1.240.SupplementaryMaterial.txt