GQC: LLM-Based Grouped QA Consolidation for Open-Domain Fact Verification at AVeriTeC
Dongzhuoran Zhou, Roxana Pop, Yuqicheng Zhu, Evgeny Kharlamov
Abstract
Structured fact verification benchmarks like AVeriTeC decompose claims into QA pairs to support fine-grained reasoning. However, current systems generate QA pairs independently for each evidence sentence, leading to redundancy, drift, and noise. We introduce a modular LLM-based QA consolidation module that jointly filters, clusters, and rewrites QA pairs at the claim level. Experiments show that this method improves evidence quality and veracity prediction accuracy. Our analysis also highlights the impact of model scale and alignment on downstream performance.
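The abstract describes consolidation as three claim-level steps: filter, cluster, rewrite. The sketch below illustrates one way such a pipeline could be wired up; the generic `llm` callable, the token-overlap clustering heuristic, the `sim_threshold` value, and all function names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of grouped QA consolidation (filter -> cluster -> rewrite),
# assuming `llm` is any prompt-in, text-out callable. Names and the similarity
# heuristic are hypothetical placeholders, not the paper's actual code.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class QAPair:
    question: str
    answer: str

def _jaccard(a: str, b: str) -> float:
    """Token-overlap similarity used as a cheap near-duplicate signal."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def consolidate(claim: str, qa_pairs: List[QAPair],
                llm: Callable[[str], str],
                sim_threshold: float = 0.6) -> List[QAPair]:
    # 1. Filter: keep only QA pairs the LLM judges relevant to the claim.
    kept = [qa for qa in qa_pairs
            if llm(f"Claim: {claim}\nQ: {qa.question}\nA: {qa.answer}\n"
                   "Is this QA pair relevant evidence for the claim? yes/no")
               .strip().lower().startswith("yes")]

    # 2. Cluster: greedily group near-duplicate questions.
    clusters: List[List[QAPair]] = []
    for qa in kept:
        for cluster in clusters:
            if _jaccard(qa.question, cluster[0].question) >= sim_threshold:
                cluster.append(qa)
                break
        else:
            clusters.append([qa])

    # 3. Rewrite: merge each cluster into one canonical QA pair.
    consolidated = []
    for cluster in clusters:
        joined = "\n".join(f"Q: {qa.question}\nA: {qa.answer}" for qa in cluster)
        merged = llm(f"Claim: {claim}\nMerge these QA pairs into a single "
                     f"question and answer:\n{joined}\n"
                     "Reply as 'Q: ...' then 'A: ...' on separate lines.")
        q, _, a = merged.partition("\nA:")
        consolidated.append(QAPair(q.removeprefix("Q:").strip(), a.strip()))
    return consolidated
```

Operating on the whole claim's QA set at once, rather than per evidence sentence, is what lets the cluster-and-rewrite steps remove the cross-sentence redundancy the abstract points to.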
- Anthology ID: 2025.fever-1.11
- Volume: Proceedings of the Eighth Fact Extraction and VERification Workshop (FEVER)
- Month: July
- Year: 2025
- Address: Vienna, Austria
- Editors: Mubashara Akhtar, Rami Aly, Christos Christodoulopoulos, Oana Cocarascu, Zhijiang Guo, Arpit Mittal, Michael Schlichtkrull, James Thorne, Andreas Vlachos
- Venues: FEVER | WS
- Publisher: Association for Computational Linguistics
- Pages: 151–161
- URL: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.fever-1.11/
- Cite (ACL): Dongzhuoran Zhou, Roxana Pop, Yuqicheng Zhu, and Evgeny Kharlamov. 2025. GQC: LLM-Based Grouped QA Consolidation for Open-Domain Fact Verification at AVeriTeC. In Proceedings of the Eighth Fact Extraction and VERification Workshop (FEVER), pages 151–161, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal): GQC: LLM-Based Grouped QA Consolidation for Open-Domain Fact Verification at AVeriTeC (Zhou et al., FEVER 2025)
- PDF: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.fever-1.11.pdf