2025
FiRC-NLP at SemEval-2025 Task 3: Exploring Prompting Approaches for Detecting Hallucinations in LLMs
Wondimagegnhue Tufa | Fadi Hassan | Guillem Collell | Dandan Tu | Yi Tu | Sang Ni | Kuan Eeik Tan
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
This paper presents a system description for the SemEval Mu-SHROOM task, focusing on detecting hallucination spans in the outputs of instruction-tuned Large Language Models (LLMs) across 14 languages. We compare two distinct approaches: the Prompt-Based Approach (PBA), which leverages the capability of LLMs to detect hallucination spans using different prompting strategies, and the Fine-Tuning-Based Approach (FBA), which fine-tunes pre-trained Language Models (LMs) to extract hallucination spans in a supervised manner. Our experiments reveal that PBA, especially when incorporating explicit references or external knowledge, outperforms FBA. However, the effectiveness of PBA varies across languages, likely due to differences in language representation within LLMs.
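As a rough illustration of what a prompt-based span-detection pipeline might look like, the sketch below builds a prompt asking an LLM to wrap hallucinated spans in tags and then maps the tagged output back to character offsets in the original answer. The prompt template, tag format, and helper names are hypothetical assumptions for illustration, not the authors' actual system.

```python
import re

def build_prompt(question: str, answer: str) -> str:
    # Hypothetical prompt template: ask the model to mark hallucinated
    # spans in the answer with <hal>...</hal> tags.
    return (
        "Given the question and the model's answer, wrap every "
        "hallucinated span in <hal>...</hal> tags and return the "
        "annotated answer.\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Annotated answer:"
    )

def extract_spans(annotated: str, original: str) -> list[tuple[int, int]]:
    """Map <hal>-tagged spans back to character offsets in the original answer."""
    spans, cursor = [], 0
    for m in re.finditer(r"<hal>(.*?)</hal>", annotated, re.S):
        # Locate each tagged substring in the untagged original text,
        # scanning left to right so repeated substrings align correctly.
        start = original.find(m.group(1), cursor)
        if start != -1:
            spans.append((start, start + len(m.group(1))))
            cursor = start + len(m.group(1))
    return spans
```

In practice the annotated string would come from the LLM's response; parsing it back to offsets is what allows span-level evaluation against gold annotations.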