ATLANTIS at SemEval-2025 Task 3: Detecting Hallucinated Text Spans in Question Answering
Catherine Kobus, Francois Lancelot, Marion-Cecile Martin, Nawal Ould Amer
Abstract
This paper presents the contributions of the ATLANTIS team to SemEval-2025 Task 3, which focuses on detecting hallucinated text spans in question answering systems. Large Language Models (LLMs) have significantly advanced Natural Language Generation (NLG) but remain susceptible to hallucinations, generating incorrect or misleading content. To address this, we explored methods both with and without external context, using few-shot prompting with an LLM, token-level classification, or an LLM fine-tuned on synthetic data. Notably, our approaches achieved top rankings in Spanish and competitive placements in English and German. This work highlights the importance of integrating relevant context to mitigate hallucinations and demonstrates the potential of fine-tuned models and prompt engineering.
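As a rough illustration of the token-level classification route mentioned in the abstract, the minimal sketch below tags each answer token as supported or hallucinated with a binary token-classification head and merges consecutive flagged tokens into character-level spans. The checkpoint name `my-org/hallucination-span-tagger` and the 0/1 label scheme are placeholder assumptions for illustration, not artifacts released with the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical checkpoint; the paper does not release a model under this id.
MODEL_NAME = "my-org/hallucination-span-tagger"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)  # fast tokenizer assumed
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME)
model.eval()

def hallucinated_spans(question: str, answer: str) -> list[tuple[int, int]]:
    """Return (start, end) character offsets of predicted hallucinated spans in `answer`."""
    # Encode question and answer as a sentence pair; the offset mapping lets us
    # project token-level predictions back onto character positions in `answer`.
    enc = tokenizer(question, answer, return_tensors="pt",
                    return_offsets_mapping=True, truncation=True)
    offsets = enc.pop("offset_mapping")[0]
    with torch.no_grad():
        logits = model(**enc).logits[0]   # shape: (seq_len, num_labels)
    preds = logits.argmax(dim=-1)         # assumed labels: 0 = supported, 1 = hallucinated

    spans, start, end = [], None, None
    for i, seg in enumerate(enc.sequence_ids(0)):
        # Only tokens from the second segment (the answer) can be tagged;
        # special tokens have seg == None and are skipped.
        tagged = seg == 1 and preds[i].item() == 1
        if tagged:
            if start is None:
                start = offsets[i][0].item()
            end = offsets[i][1].item()
        elif start is not None:
            spans.append((start, end))   # close the current contiguous span
            start = end = None
    if start is not None:
        spans.append((start, end))
    return spans
```

Returning character offsets rather than token indices matches the span-level format the shared task evaluates, since predicted hallucinations are scored as character spans over the generated answer.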
- Anthology ID:
- 2025.semeval-1.145
- Volume:
- Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
- Month:
- July
- Year:
- 2025
- Address:
- Vienna, Austria
- Editors:
- Sara Rosenthal, Aiala Rosá, Debanjan Ghosh, Marcos Zampieri
- Venues:
- SemEval | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 1098–1107
- URL:
- https://preview.aclanthology.org/corrections-2025-08/2025.semeval-1.145/
- Cite (ACL):
- Catherine Kobus, Francois Lancelot, Marion-Cecile Martin, and Nawal Ould Amer. 2025. ATLANTIS at SemEval-2025 Task 3: Detecting Hallucinated Text Spans in Question Answering. In Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025), pages 1098–1107, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal):
- ATLANTIS at SemEval-2025 Task 3: Detecting Hallucinated Text Spans in Question Answering (Kobus et al., SemEval 2025)
- PDF:
- https://preview.aclanthology.org/corrections-2025-08/2025.semeval-1.145.pdf