Graph-of-Thoughts for Fact-Checking with Large Language Models

Sascha Rolinger, Jin Liu


Abstract
We present a fact-checking system developed for the 2025 Automated Verification of Textual Claims (AVeriTeC) shared task, leveraging the Graph-of-Thoughts (GoT) prompting scheme. The GoT approach facilitates iterative refinement during fact-checking by conditioning question generation on previous answers and enabling the incorporation of multiple evidence documents per question, thereby mitigating the impact of factually incorrect evidence. The efficiency requirements of the shared task are addressed by restricting the width and depth of the thought graph. Additionally, an efficient stopping criterion is derived from the dataset’s Not Enough Information (NEI) label. Our system utilizes fine-tuned open-source Large Language Models (LLMs) for question generation, question answering, and final verdict prediction. Empirical results demonstrate competitive performance against top-performing systems in the AVeriTeC shared task and improvements over the baseline method. Our code is publicly available.
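The abstract describes an iterative question-generation loop whose width and depth are bounded and which stops early based on the NEI label. The sketch below illustrates only that control flow as described; all function names, signatures, and the specific bounds are illustrative assumptions, not the authors' actual implementation or API.

```python
# Minimal sketch of a GoT-style iterative fact-checking loop, assuming
# hypothetical components generate_question, retrieve_evidence,
# answer_question, and predict_verdict (placeholders for the paper's
# fine-tuned LLM modules, not the authors' code).

MAX_DEPTH = 3  # assumed bound on thought-graph depth
MAX_WIDTH = 2  # assumed bound on evidence documents per question

def fact_check(claim, generate_question, retrieve_evidence,
               answer_question, predict_verdict):
    """Iteratively gather question-answer evidence, stopping early once the
    verdict model no longer predicts Not Enough Information (NEI)."""
    qa_history = []
    for _ in range(MAX_DEPTH):
        # Condition the next question on the claim and all previous answers.
        question = generate_question(claim, qa_history)

        # Use several evidence documents per question so that a single
        # factually incorrect document carries less weight.
        documents = retrieve_evidence(question, k=MAX_WIDTH)
        answer = answer_question(question, documents)
        qa_history.append((question, answer))

        # Stopping criterion derived from the NEI label: stop expanding the
        # graph as soon as the evidence supports a definite verdict.
        verdict = predict_verdict(claim, qa_history)
        if verdict != "Not Enough Information":
            return verdict, qa_history

    return predict_verdict(claim, qa_history), qa_history
```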
Anthology ID:
2025.fever-1.21
Volume:
Proceedings of the Eighth Fact Extraction and VERification Workshop (FEVER)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Mubashara Akhtar, Rami Aly, Christos Christodoulopoulos, Oana Cocarascu, Zhijiang Guo, Arpit Mittal, Michael Schlichtkrull, James Thorne, Andreas Vlachos
Venues:
FEVER | WS
Publisher:
Association for Computational Linguistics
Pages:
266–273
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.fever-1.21/
Cite (ACL):
Sascha Rolinger and Jin Liu. 2025. Graph-of-Thoughts for Fact-Checking with Large Language Models. In Proceedings of the Eighth Fact Extraction and VERification Workshop (FEVER), pages 266–273, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Graph-of-Thoughts for Fact-Checking with Large Language Models (Rolinger & Liu, FEVER 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.fever-1.21.pdf