Word Frequency Does Not Predict Grammatical Knowledge in Language Models

Charles Yu, Ryan Sie, Nicolas Tedeschi, Leon Bergen


Abstract
Neural language models learn, to varying degrees of accuracy, the grammatical properties of natural languages. In this work, we investigate whether there are systematic sources of variation in the language models’ accuracy. Focusing on subject-verb agreement and reflexive anaphora, we find that certain nouns are systematically understood better than others, an effect which is robust across grammatical tasks and different language models. Surprisingly, we find that across four orders of magnitude, corpus frequency is unrelated to a noun’s performance on grammatical tasks. Finally, we find that a novel noun’s grammatical properties can be few-shot learned from various types of training data. The results present a paradox: there should be less variation in grammatical performance than is actually observed.
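For readers unfamiliar with how grammatical knowledge is typically probed, the following is a minimal sketch (not the authors' released code, which is linked below) of the standard minimal-pair setup for subject-verb agreement: a masked language model scores the grammatical versus ungrammatical verb form for a given subject noun. The model choice ("bert-base-uncased"), the template sentence, and the example nouns are illustrative assumptions, not taken from the paper.

# Minimal-pair probe sketch for subject-verb agreement (assumptions noted above).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def agreement_score(noun: str, grammatical: str = "is", ungrammatical: str = "are") -> float:
    """Return log P(grammatical verb) - log P(ungrammatical verb) at the masked verb slot."""
    # Hypothetical template with a distractor noun ("tables") between subject and verb.
    text = f"The {noun} near the tables {tokenizer.mask_token} very old."
    inputs = tokenizer(text, return_tensors="pt")
    mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_idx]  # scores over the vocabulary at the mask
    log_probs = torch.log_softmax(logits, dim=-1).squeeze(0)
    good_id = tokenizer.convert_tokens_to_ids(grammatical)
    bad_id = tokenizer.convert_tokens_to_ids(ungrammatical)
    return (log_probs[good_id] - log_probs[bad_id]).item()

# A positive score means the model prefers the grammatical verb form for this noun;
# per-noun scores of this kind can then be compared against corpus frequency.
for noun in ["lawyer", "pamphlet", "consensus"]:
    print(noun, round(agreement_score(noun), 3))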
Anthology ID:
2020.emnlp-main.331
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4040–4054
URL:
https://aclanthology.org/2020.emnlp-main.331
DOI:
10.18653/v1/2020.emnlp-main.331
Cite (ACL):
Charles Yu, Ryan Sie, Nicolas Tedeschi, and Leon Bergen. 2020. Word Frequency Does Not Predict Grammatical Knowledge in Language Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4040–4054, Online. Association for Computational Linguistics.
Cite (Informal):
Word Frequency Does Not Predict Grammatical Knowledge in Language Models (Yu et al., EMNLP 2020)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2020.emnlp-main.331.pdf
Video:
https://slideslive.com/38939271
Code:
CharlesYu2000/lm-variation
Data:
WebText