CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training
Patrick Huber, Armen Aghajanyan, Barlas Oguz, Dmytro Okhonko, Scott Yih, Sonal Gupta, Xilun Chen
Abstract
We propose a novel open-domain question-answering dataset based on the Common Crawl project. With a previously unseen number of around 130 million multilingual question-answer pairs (including about 60 million English data-points), we use our large-scale, natural, diverse and high-quality corpus to in-domain pre-train popular language models for the task of question-answering. In our experiments, we find that our Common Crawl Question Answering dataset (CCQA) achieves promising results in zero-shot, low resource and fine-tuned settings across multiple tasks, models and benchmarks.
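As a rough illustration of how web-scale question-answer pairs can be mined from crawled HTML, here is a minimal Python sketch that reads schema.org QAPage annotations embedded as JSON-LD, the kind of structured markup such QA mining relies on. This is a simplified sketch, not the authors' pipeline (see facebookresearch/CCQA for the released code); the regex-based script extraction, the helper name, and the example document are illustrative assumptions.

```python
# Illustrative sketch: extract question-answer pairs from schema.org
# QAPage JSON-LD markup in raw HTML. Not the CCQA authors' pipeline.
import json
import re

# A regex is good enough for a sketch; a real pipeline would use an HTML parser.
JSONLD_RE = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def extract_qa_pairs(html: str):
    """Yield (question, answer) text pairs from QAPage JSON-LD blocks."""
    for block in JSONLD_RE.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed markup is common on the web; skip it
        if data.get("@type") != "QAPage":
            continue
        entity = data.get("mainEntity", {})
        question = entity.get("name") or entity.get("text")
        answers = entity.get("acceptedAnswer") or entity.get("suggestedAnswer") or []
        if isinstance(answers, dict):
            answers = [answers]  # normalize single answer to a list
        for answer in answers:
            if question and answer.get("text"):
                yield question, answer["text"]

# Hypothetical example document for demonstration.
html = """<script type="application/ld+json">
{"@type": "QAPage", "mainEntity": {"name": "What is Common Crawl?",
 "acceptedAnswer": {"text": "A public archive of web crawl data."}}}
</script>"""
for q, a in extract_qa_pairs(html):
    print(q, "->", a)
```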
- Anthology ID:
- 2022.findings-naacl.184
- Volume:
- Findings of the Association for Computational Linguistics: NAACL 2022
- Month:
- July
- Year:
- 2022
- Address:
- Seattle, United States
- Editors:
- Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 2402–2420
- URL:
- https://aclanthology.org/2022.findings-naacl.184
- DOI:
- 10.18653/v1/2022.findings-naacl.184
- Cite (ACL):
- Patrick Huber, Armen Aghajanyan, Barlas Oguz, Dmytro Okhonko, Scott Yih, Sonal Gupta, and Xilun Chen. 2022. CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2402–2420, Seattle, United States. Association for Computational Linguistics.
- Cite (Informal):
- CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training (Huber et al., Findings 2022)
- PDF:
- https://aclanthology.org/2022.findings-naacl.184.pdf
- Code:
- facebookresearch/CCQA
- Data:
- CCQA, CC100, CCNet, ELI5, GooAQ, Natural Questions, TriviaQA