Mubashir Ali
2025
OldJoe at AVeriTeC: In-context learning for fact-checking
Farah Ftouhi | Russel Dsouza | Lance Calvin Lim Gamboa | Asim Abbas | Mubashir Ali | Yue Feng | Mark G. Lee | Venelin Kovatchev
Proceedings of the Eighth Fact Extraction and VERification Workshop (FEVER)
In this paper, we present the system developed by our team, OldJoe, for the 8th edition of the AVeriTeC shared task, held as part of the FEVER workshop. The objective of this task is to verify the factuality of real-world claims. Our approach integrates open-source large language models, SQL, and in-context learning. We begin by embedding the knowledge store with a pretrained embedding language model and storing the resulting vectors in a SQL database. Subsequently, we prompt an LLM to craft relevant questions based on the input claim, which are then used to guide the retrieval process. We further prompt the LLM to generate answers to the questions and to predict the veracity of the original claim. Our system achieved a HU-METEOR AVeriTeC score of 0.49 on the dev set and an Ev2R recall of 0.15 on the test set. Due to time constraints, we were unable to conduct additional experiments or further hyperparameter tuning. As a result, we adopted this pipeline configuration, centered on the Qwen3-14B-AWQ model, as our final submission. The full pipeline is available on GitHub: https://github.com/farahft/OldJoe
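To make the described pipeline concrete, the sketch below wires together its main stages: embedding the knowledge store into a SQL database, prompting an LLM for retrieval-guiding questions, retrieving evidence by similarity, answering the questions, and predicting a verdict. This is a minimal illustrative sketch, not the authors' code (see the linked GitHub repository for the actual implementation); the embedding model name, the table schema, and the `ask_llm` helper are assumptions.

```python
# Illustrative sketch of the OldJoe-style pipeline; model names, schema, and
# the ask_llm placeholder are assumptions, not the authors' implementation.
import sqlite3
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# 1. Embed the knowledge store and keep the vectors in a SQL database.
db = sqlite3.connect("knowledge_store.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS passages "
    "(id INTEGER PRIMARY KEY, text TEXT, embedding BLOB)"
)

def index_passages(passages):
    vectors = embedder.encode(passages, normalize_embeddings=True)
    rows = [(t, v.astype(np.float32).tobytes()) for t, v in zip(passages, vectors)]
    db.executemany("INSERT INTO passages (text, embedding) VALUES (?, ?)", rows)
    db.commit()

def retrieve(query, k=5):
    # Cosine similarity over normalized vectors reduces to a dot product.
    q = embedder.encode([query], normalize_embeddings=True)[0].astype(np.float32)
    rows = db.execute("SELECT text, embedding FROM passages").fetchall()
    scored = [(t, float(np.frombuffer(b, dtype=np.float32) @ q)) for t, b in rows]
    return [t for t, _ in sorted(scored, key=lambda x: -x[1])[:k]]

def ask_llm(prompt):
    """Placeholder for a call to an open-source instruction-tuned LLM
    (the paper uses Qwen3-14B-AWQ); plug in your own inference stack."""
    raise NotImplementedError

def verify(claim):
    # 2. Prompt the LLM for questions that guide retrieval.
    questions = ask_llm(
        f"Write 3 questions whose answers would verify this claim: {claim}"
    ).splitlines()
    # 3. Retrieve evidence, answer each question, then predict the verdict.
    qa_pairs = []
    for question in questions:
        evidence = "\n".join(retrieve(question))
        answer = ask_llm(
            f"Evidence:\n{evidence}\n\nQuestion: {question}\nAnswer briefly:"
        )
        qa_pairs.append((question, answer))
    context = "\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    return ask_llm(
        f"Claim: {claim}\n{context}\n"
        "Verdict (Supported / Refuted / Not Enough Evidence / Conflicting):"
    )
```

Storing embeddings as BLOBs in SQLite keeps the retrieval layer self-contained; a dedicated vector index would be a natural swap at larger scale.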
Structured Tender Entities Extraction from Complex Tables with Few-shot Learning
Asim Abbas | Mark Lee | Niloofer Shanavas | Venelin Kovatchev | Mubashir Ali
Proceedings of the 1st Regulatory NLP Workshop (RegNLP 2025)
Extracting structured text from complex tables in PDF tender documents remains a challenging task because structural and positional information is lost during extraction. AI-based models often require extensive training data, making development from scratch both tedious and time-consuming. Our research focuses on identifying tender entities in complex table formats within PDF documents. To address this, we propose a novel approach that uses few-shot learning with large language models (LLMs) to restore the structure of the extracted text. In addition, handcrafted rules and regular expressions are applied for precise entity classification. To evaluate the robustness of LLMs under few-shot learning, we employ data-shuffling techniques. Our experiments show that current text extraction tools fail to deliver satisfactory results for complex table structures. However, the few-shot learning approach significantly enhances the structural integrity of the extracted data and improves the accuracy of tender entity identification.
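The abstract combines two mechanisms: a few-shot LLM prompt that restores table structure from flattened PDF text, and handcrafted rules with regular expressions that classify the tender entities in the restored rows. The sketch below illustrates that combination under stated assumptions; the few-shot example, prompt wording, and regex patterns are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch of few-shot structure restoration plus rule-based
# entity classification; all examples and patterns here are assumptions.
import re

# One assumed few-shot demonstration: flattened table text -> structured row.
FEW_SHOT = """Raw extracted text:
Lot 1 Supply of laptops 2024-03-01 150000 EUR
Restored table row:
| lot_no: 1 | item: Supply of laptops | deadline: 2024-03-01 | value: 150000 EUR |
"""

def build_prompt(raw_text):
    # The prompt shows the LLM how flattened text maps to a structured row;
    # its completion is the restored row fed to classify_entities below.
    return f"{FEW_SHOT}\nRaw extracted text:\n{raw_text}\nRestored table row:\n"

# Handcrafted rules / regular expressions for tender entities (illustrative).
ENTITY_PATTERNS = {
    "deadline": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
    "value": re.compile(r"\b\d[\d,.]*\s?(EUR|GBP|USD)\b"),
    "lot_no": re.compile(r"\blot[_\s]?no:\s*\d+", re.IGNORECASE),
}

def classify_entities(restored_row):
    # Apply each pattern to the LLM-restored row and collect matched entities.
    entities = {}
    for label, pattern in ENTITY_PATTERNS.items():
        match = pattern.search(restored_row)
        if match:
            entities[label] = match.group(0)
    return entities

# Example usage on a restored row of the kind the prompt is meant to produce:
row = "| lot_no: 2 | item: Office cleaning | deadline: 2025-01-15 | value: 80000 GBP |"
print(classify_entities(row))
# {'deadline': '2025-01-15', 'value': '80000 GBP', 'lot_no': 'lot_no: 2'}
```

Keeping the classification stage rule-based means the LLM only has to recover layout, which is the part the paper reports as most damaged by PDF extraction.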