How much pretraining data do language models need to learn syntax?

Laura Pérez-Mayos, Miguel Ballesteros, Leo Wanner


Abstract
Transformer-based pretrained language models achieve outstanding results on many well-known NLU benchmarks. However, while pretraining methods are very convenient, they are expensive in terms of time and resources. This calls for a study of the impact of pretraining data size on the knowledge of the models. We explore this impact on the syntactic capabilities of RoBERTa, using models trained on incremental sizes of raw text data. First, we use syntactic structural probes to determine whether models pretrained on more data encode a higher amount of syntactic information. Second, we perform a targeted syntactic evaluation to analyze the impact of pretraining data size on the syntactic generalization performance of the models. Third, we compare the performance of the different models on three downstream applications: part-of-speech tagging, dependency parsing, and paraphrase identification. We complement our study with an analysis of the cost-benefit trade-off of training such models. Our experiments show that while models pretrained on more data encode more syntactic knowledge and perform better on downstream applications, they do not always offer better performance across the different syntactic phenomena, and they come at a higher financial and environmental cost.
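As an illustration of the targeted syntactic evaluation mentioned in the abstract, the sketch below (not the authors' code; the model name and minimal pair are illustrative assumptions) scores a grammatical/ungrammatical subject-verb agreement pair with an off-the-shelf RoBERTa using pseudo-log-likelihood: each token is masked in turn, the log-probability of the true token is accumulated, and the grammatical variant is expected to receive the higher total score.

```python
# Minimal sketch of a targeted syntactic evaluation via pseudo-log-likelihood.
# Assumptions: "roberta-base" stands in for the paper's incrementally pretrained
# models, and the minimal pair is a generic subject-verb agreement example.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Mask each token in turn and sum the log-probabilities of the true tokens."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Skip the special <s> and </s> tokens at the sequence boundaries.
    for i in range(1, ids.size(0) - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total

# Subject-verb agreement minimal pair (illustrative).
grammatical = "The keys to the cabinet are on the table."
ungrammatical = "The keys to the cabinet is on the table."
print(pseudo_log_likelihood(grammatical) > pseudo_log_likelihood(ungrammatical))  # expected: True
```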
Anthology ID:
2021.emnlp-main.118
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1571–1582
URL:
https://aclanthology.org/2021.emnlp-main.118
DOI:
10.18653/v1/2021.emnlp-main.118
Cite (ACL):
Laura Pérez-Mayos, Miguel Ballesteros, and Leo Wanner. 2021. How much pretraining data do language models need to learn syntax? In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1571–1582, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
How much pretraining data do language models need to learn syntax? (Pérez-Mayos et al., EMNLP 2021)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2021.emnlp-main.118.pdf
Video:
https://preview.aclanthology.org/auto-file-uploads/2021.emnlp-main.118.mp4
Data:
GLUE