Overestimation of Syntactic Representation in Neural Language Models

Jordan Kodner, Nitish Gupta


Abstract
With the advent of powerful neural language models over the last few years, research attention has increasingly focused on which aspects of language they represent that make them so successful. Several testing methodologies have been developed to probe models’ syntactic representations. One popular method for determining a model’s ability to induce syntactic structure trains a model on strings generated according to a template and then tests the model’s ability to distinguish such strings from superficially similar ones with different syntax. We illustrate a fundamental problem with this approach by reproducing positive results from a recent paper with two non-syntactic baseline language models: an n-gram model and an LSTM model trained on scrambled inputs.
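
To make the testing paradigm concrete, the following is a minimal Python sketch, not the paper's code: the template sentences, helper names, and add-alpha smoothing are invented here for illustration. It trains a purely non-syntactic bigram model on template-generated strings and shows that it can nonetheless prefer an in-template string over a superficially similar one with different syntax, the kind of false positive the abstract describes.

    # Illustrative sketch only; data and names are hypothetical.
    from collections import Counter
    import math

    def bigram_logprob(sentence, counts, contexts, vocab_size, alpha=1.0):
        # Add-alpha smoothed bigram log-probability of a token list.
        tokens = ["<s>"] + sentence + ["</s>"]
        lp = 0.0
        for prev, cur in zip(tokens, tokens[1:]):
            lp += math.log((counts[(prev, cur)] + alpha)
                           / (contexts[prev] + alpha * vocab_size))
        return lp

    # Hypothetical template-generated training strings (object relatives).
    train = [
        "the bird that the cat chased sings".split(),
        "the cat that the dog saw runs".split(),
    ]
    counts, contexts = Counter(), Counter()
    for sent in train:
        toks = ["<s>"] + sent + ["</s>"]
        counts.update(zip(toks, toks[1:]))   # bigram counts
        contexts.update(toks[:-1])           # context (history) counts
    vocab_size = len({t for s in train for t in s} | {"<s>", "</s>"})

    good = "the dog that the bird saw runs".split()  # matches the template
    bad = "the dog that the bird runs saw".split()   # same words, other syntax
    print(bigram_logprob(good, counts, contexts, vocab_size)
          > bigram_logprob(bad, counts, contexts, vocab_size))  # True

The bigram model has no syntactic representation at all, yet it scores the in-template string higher, so passing this kind of discrimination test does not by itself demonstrate induced syntactic structure.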
Anthology ID:
2020.acl-main.160
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1757–1762
URL:
https://aclanthology.org/2020.acl-main.160
DOI:
10.18653/v1/2020.acl-main.160
Cite (ACL):
Jordan Kodner and Nitish Gupta. 2020. Overestimation of Syntactic Representation in Neural Language Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1757–1762, Online. Association for Computational Linguistics.
Cite (Informal):
Overestimation of Syntactic Representation in Neural Language Models (Kodner & Gupta, ACL 2020)
PDF:
https://preview.aclanthology.org/landing_page/2020.acl-main.160.pdf
Video:
http://slideslive.com/38928906