How Do Neural Sequence Models Generalize? Local and Global Cues for Out-of-Distribution Prediction

D. Anthony Bau, Jacob Andreas


Abstract
After a neural sequence model encounters an unexpected token, can its behavior be predicted? We show that RNN and transformer language models exhibit structured, consistent generalization in out-of-distribution contexts. We begin by introducing two idealized models of generalization in next-word prediction: a lexical context model in which generalization is consistent with the last word observed, and a syntactic context model in which generalization is consistent with the global structure of the input. In experiments in English, Finnish, Mandarin, and random regular languages, we demonstrate that neural language models interpolate between these two forms of generalization: their predictions are well-approximated by a log-linear combination of lexical and syntactic predictive distributions. We then show that, in some languages, noise mediates the two forms of generalization: noise applied to input tokens encourages syntactic generalization, while noise in history representations encourages lexical generalization. Finally, we offer a preliminary theoretical explanation of these results by proving that the observed interpolation behavior is expected in log-linear models with a particular feature correlation structure. These results help explain the effectiveness of two popular regularization schemes and show that aspects of sequence model generalization can be understood and controlled.
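The abstract's central claim is that model predictions are well-approximated by a log-linear combination of a lexical and a syntactic predictive distribution. The sketch below illustrates that combination in its generic form, p(w) ∝ p_lex(w)^λ · p_syn(w)^(1−λ); the function name, the mixing weight `lam`, and the toy distributions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def log_linear_interpolate(p_lex, p_syn, lam):
    """Combine two predictive distributions log-linearly:
    p(w) proportional to p_lex(w)**lam * p_syn(w)**(1 - lam).
    `lam` controls how much weight the lexical (local-context) model gets."""
    log_p = lam * np.log(p_lex) + (1.0 - lam) * np.log(p_syn)
    log_p -= log_p.max()          # subtract max for numerical stability
    p = np.exp(log_p)
    return p / p.sum()            # renormalize over the vocabulary

# Toy example over a 4-word vocabulary: a lexical model conditioned on the
# last observed word and a syntactic model conditioned on global structure.
p_lexical   = np.array([0.70, 0.10, 0.10, 0.10])
p_syntactic = np.array([0.10, 0.60, 0.20, 0.10])
print(log_linear_interpolate(p_lexical, p_syntactic, lam=0.5))
```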
Anthology ID:
2021.emnlp-main.448
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5513–5526
URL:
https://aclanthology.org/2021.emnlp-main.448
DOI:
10.18653/v1/2021.emnlp-main.448
Cite (ACL):
D. Anthony Bau and Jacob Andreas. 2021. How Do Neural Sequence Models Generalize? Local and Global Cues for Out-of-Distribution Prediction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5513–5526, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
How Do Neural Sequence Models Generalize? Local and Global Cues for Out-of-Distribution Prediction (Bau & Andreas, EMNLP 2021)
PDF:
https://preview.aclanthology.org/naacl24-info/2021.emnlp-main.448.pdf
Video:
https://preview.aclanthology.org/naacl24-info/2021.emnlp-main.448.mp4