OldJoe at AVeriTeC: In-context learning for fact-checking

Farah Ftouhi, Russel Dsouza, Lance Calvin Lim Gamboa, Asim Abbas, Mubashir Ali, Yue Feng, Mark G. Lee, Venelin Kovatchev


Abstract
In this paper, we present the system developed by our team, OldJoe, for the 8th edition of the AVeriTeC shared task, held as part of the FEVER workshop. The objective of the task is to verify the factuality of real-world claims. Our approach integrates open-source large language models, SQL, and in-context learning. We first embed the knowledge store with a pretrained embedding language model and store the resulting vectors in a SQL database. We then prompt an LLM to craft relevant questions based on the input claim, which guide the retrieval process. Finally, we prompt the LLM to answer the generated questions and predict the veracity of the original claim. Our system achieved a HU-METEOR AVeriTeC score of 0.49 on the dev set and an Ev2R recall of 0.15 on the test set. Due to time constraints, we were unable to conduct additional experiments or further hyperparameter tuning, so we adopted this pipeline configuration, centered on the Qwen3-14B-AWQ model, as our final submission. The full pipeline is available on GitHub: https://github.com/farahft/OldJoe
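
The sketch below illustrates the pipeline shape described in the abstract (embed the knowledge store, persist embeddings in SQL, generate claim-specific questions with an LLM, retrieve evidence, answer, and predict a verdict). It is a minimal illustration, not the authors' implementation: the embedding model name, the prompt wording, and the `llm_generate` helper (a stand-in for whatever backend serves Qwen3-14B-AWQ) are assumptions; see the paper and the GitHub repository for the actual system.

```python
"""Minimal sketch of an AVeriTeC-style pipeline: embeddings in SQLite,
LLM-generated questions, retrieval, answering, and verdict prediction.
Model names, prompts, and llm_generate() are illustrative assumptions."""
import sqlite3
import numpy as np
from sentence_transformers import SentenceTransformer

# Stand-in embedding model (the paper uses a pretrained embedding LM; this one is assumed).
EMBEDDER = SentenceTransformer("all-MiniLM-L6-v2")


def build_store(passages: list[str], db_path: str = "knowledge.db") -> None:
    """Embed knowledge-store passages and persist text + vectors in a SQL database."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS passages (id INTEGER PRIMARY KEY, text TEXT, emb BLOB)"
    )
    vecs = EMBEDDER.encode(passages, normalize_embeddings=True)
    conn.executemany(
        "INSERT INTO passages (text, emb) VALUES (?, ?)",
        [(t, v.astype(np.float32).tobytes()) for t, v in zip(passages, vecs)],
    )
    conn.commit()
    conn.close()


def retrieve(query: str, db_path: str = "knowledge.db", k: int = 5) -> list[str]:
    """Return the k passages most similar to the query (cosine on normalized vectors)."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT text, emb FROM passages").fetchall()
    conn.close()
    q = EMBEDDER.encode([query], normalize_embeddings=True)[0]
    scored = [(float(np.frombuffer(e, dtype=np.float32) @ q), t) for t, e in rows]
    return [t for _, t in sorted(scored, key=lambda s: s[0], reverse=True)[:k]]


def llm_generate(prompt: str) -> str:
    """Placeholder for the LLM backend (e.g. a locally served Qwen3-14B-AWQ)."""
    raise NotImplementedError("Wire this to your inference server of choice.")


def verify(claim: str) -> str:
    """Question generation -> retrieval -> answering -> verdict, all via in-context prompting."""
    questions = llm_generate(
        f"Write three short questions whose answers would help verify this claim:\n{claim}"
    ).splitlines()
    qa_pairs = []
    for q in filter(None, (q.strip() for q in questions)):
        evidence = "\n".join(retrieve(q))
        answer = llm_generate(f"Evidence:\n{evidence}\n\nAnswer the question: {q}")
        qa_pairs.append(f"Q: {q}\nA: {answer}")
    return llm_generate(
        "Given the question-answer evidence below, label the claim as Supported, "
        "Refuted, Not Enough Evidence, or Conflicting Evidence/Cherrypicking.\n"
        f"Claim: {claim}\n" + "\n".join(qa_pairs)
    )
```
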
Anthology ID:
2025.fever-1.18
Volume:
Proceedings of the Eighth Fact Extraction and VERification Workshop (FEVER)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Mubashara Akhtar, Rami Aly, Christos Christodoulopoulos, Oana Cocarascu, Zhijiang Guo, Arpit Mittal, Michael Schlichtkrull, James Thorne, Andreas Vlachos
Venues:
FEVER | WS
Publisher:
Association for Computational Linguistics
Pages:
238–246
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.fever-1.18/
Cite (ACL):
Farah Ftouhi, Russel Dsouza, Lance Calvin Lim Gamboa, Asim Abbas, Mubashir Ali, Yue Feng, Mark G. Lee, and Venelin Kovatchev. 2025. OldJoe at AVeriTeC: In-context learning for fact-checking. In Proceedings of the Eighth Fact Extraction and VERification Workshop (FEVER), pages 238–246, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
OldJoe at AVeriTeC: In-context learning for fact-checking (Ftouhi et al., FEVER 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.fever-1.18.pdf