SynClaimEval: A Framework for Evaluating the Utility of Synthetic Data in Long-Context Claim Verification

Mohamed Elaraby, Jyoti Prakash Maheswari


Abstract
Large Language Models (LLMs) with extended context windows promise direct reasoning over long documents, reducing the need for chunking or retrieval. Constructing annotated resources for training and evaluation, however, remains costly. Synthetic data offers a scalable alternative, and we introduce SynClaimEval, a framework for evaluating synthetic data utility in long-context claim verification—a task central to hallucination detection and fact-checking. Our framework examines three dimensions: (i) input characteristics, by varying context length and testing generalization to out-of-domain benchmarks; (ii) synthesis logic, by controlling claim complexity and error type variation; and (iii) explanation quality, measuring the degree to which model explanations provide evidence consistent with predictions. Experiments across benchmarks show that long-context synthesis can improve verification in base instruction-tuned models, particularly when augmenting existing human-written datasets. Moreover, synthesis enhances explanation quality even when verification scores do not improve, underscoring its potential to strengthen both performance and explainability.
Anthology ID:
2025.eval4nlp-1.8
Volume:
Proceedings of the 5th Workshop on Evaluation and Comparison of NLP Systems
Month:
December
Year:
2025
Address:
Mumbai, India
Editors:
Mousumi Akter, Tahiya Chowdhury, Steffen Eger, Christoph Leiter, Juri Opitz, Erion Çano
Venues:
Eval4NLP | WS
Publisher:
Association for Computational Linguistics
Pages:
91–108
URL:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.eval4nlp-1.8/
Cite (ACL):
Mohamed Elaraby and Jyoti Prakash Maheswari. 2025. SynClaimEval: A Framework for Evaluating the Utility of Synthetic Data in Long-Context Claim Verification. In Proceedings of the 5th Workshop on Evaluation and Comparison of NLP Systems, pages 91–108, Mumbai, India. Association for Computational Linguistics.
Cite (Informal):
SynClaimEval: A Framework for Evaluating the Utility of Synthetic Data in Long-Context Claim Verification (Elaraby & Maheswari, Eval4NLP 2025)
PDF:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.eval4nlp-1.8.pdf