Detecting Hallucinations in Scientific Claims by Combining Prompting Strategies and Internal State Classification

Yupeng Cao, Chun-Nam Yu, K.P. Subbalakshmi


Abstract
Large Language Model (LLM)-based research assistant tools demonstrate impressive capabilities, yet their outputs may contain hallucinations that compromise reliability. Detecting hallucinations in automatically generated scientific content is therefore essential. The SciHal2025: Hallucination Detection for Scientific Content challenge at ACL 2025 provides a valuable platform for advancing this goal. This paper presents our solution to the SciHal2025 challenge. Our method combines several prompting strategies with fine-tuned base LLMs. We first benchmark multiple LLMs on the SciHal dataset. Next, we develop a detection pipeline that integrates few-shot and chain-of-thought prompting. Hidden representations extracted from the LLMs serve as features for an auxiliary classifier, further improving accuracy. Finally, we fine-tune the selected base LLMs to enhance end-to-end performance. We present comprehensive experimental results and discuss the implications of our findings for future research on hallucination detection in scientific content.
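
The internal-state classification idea in the abstract can be illustrated with a minimal sketch: extract a hidden representation from an LLM for each (claim, context) pair and train a lightweight auxiliary classifier on those features. The model name, prompt format, layer choice, and toy training data below are illustrative assumptions, not the authors' implementation.

import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model; the paper benchmarks multiple LLMs, not necessarily this one.
MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, output_hidden_states=True, torch_dtype=torch.float16
)
model.eval()

def claim_features(claim: str, context: str) -> torch.Tensor:
    """Last-layer hidden state of the final token, used as a feature vector."""
    prompt = f"Context: {context}\nClaim: {claim}\nIs the claim supported?"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states[-1] has shape (batch, seq_len, hidden_dim); keep the last token.
    return outputs.hidden_states[-1][0, -1].float()

# Toy labeled examples: (claim, reference context, label); 1 = hallucinated.
train_examples = [
    ("Accuracy reached 95% on the test set.", "The model attained 95% test accuracy.", 0),
    ("The cohort included 10,000 participants.", "The study enrolled 120 patients.", 1),
]

# Fit the auxiliary classifier on the extracted hidden-state features.
X = torch.stack([claim_features(c, ctx) for c, ctx, _ in train_examples]).numpy()
y = [label for _, _, label in train_examples]
clf = LogisticRegression(max_iter=1000).fit(X, y)

In practice, such a probe would be trained on the SciHal labels and could be combined with the prompting-based predictions; the two-example dataset here only demonstrates the plumbing.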
Anthology ID:
2025.sdp-1.30
Volume:
Proceedings of the Fifth Workshop on Scholarly Document Processing (SDP 2025)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Tirthankar Ghosal, Philipp Mayr, Amanpreet Singh, Aakanksha Naik, Georg Rehm, Dayne Freitag, Dan Li, Sonja Schimmler, Anita De Waard
Venues:
sdp | WS
Publisher:
Association for Computational Linguistics
Pages:
316–327
URL:
https://preview.aclanthology.org/landing_page/2025.sdp-1.30/
Cite (ACL):
Yupeng Cao, Chun-Nam Yu, and K.P. Subbalakshmi. 2025. Detecting Hallucinations in Scientific Claims by Combining Prompting Strategies and Internal State Classification. In Proceedings of the Fifth Workshop on Scholarly Document Processing (SDP 2025), pages 316–327, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Detecting Hallucinations in Scientific Claims by Combining Prompting Strategies and Internal State Classification (Cao et al., sdp 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.sdp-1.30.pdf