Vineet Gangireddy


2022

pdf
Probing for Incremental Parse States in Autoregressive Language Models
Tiwalayo Eisape | Vineet Gangireddy | Roger Levy | Yoon Kim
Findings of the Association for Computational Linguistics: EMNLP 2022

Next-word predictions from autoregressive neural language models show remarkable sensitivity to syntax. This work evaluates the extent to which this behavior arises as a result of a learned ability to maintain implicit representations of incremental syntactic structures. We extend work in syntactic probing to the incremental setting and present several probes for extracting incomplete syntactic structure (operationalized through parse states from a stack-based parser) from autoregressive language models. We find that our probes can be used to predict model preferences on ambiguous sentence prefixes and to causally intervene on model representations, steering model behavior. This suggests that implicit incremental syntactic inferences underlie next-word predictions in autoregressive neural language models.
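
To illustrate the probing setup the abstract describes, here is a minimal sketch, assuming a GPT-2-style model from Hugging Face transformers with a simple linear probe trained on its frozen hidden states. The per-token parse-state labels and the label-set size `NUM_PARSE_STATES` are hypothetical placeholders standing in for the stack-based parser states used in the paper; the paper's actual probe architectures and data may differ.

```python
# Sketch: train a linear probe to predict a (toy) incremental parse-state
# label for each token from a frozen autoregressive LM's hidden states.
import torch
import torch.nn as nn
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

NUM_PARSE_STATES = 8  # hypothetical size of the parse-state label set
probe = nn.Linear(model.config.hidden_size, NUM_PARSE_STATES)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

def train_step(sentence: str, parse_state_labels: list[int]) -> float:
    """One probe update: predict each token's incremental parse state
    from the frozen LM's hidden state at that position."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():  # the LM itself stays frozen
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_size)
    logits = probe(hidden)
    loss = nn.functional.cross_entropy(logits, torch.tensor(parse_state_labels))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: labels must align one-to-one with the model's tokenization.
sent = "The horse raced past the barn"
n_tokens = len(tokenizer(sent)["input_ids"])
toy_labels = [i % NUM_PARSE_STATES for i in range(n_tokens)]
print(train_step(sent, toy_labels))
```

Once trained, a probe of this kind can be read out on ambiguous sentence prefixes to compare its predicted parse states against the model's next-word preferences, and its weights give a direction along which representations could be perturbed for the kind of causal intervention the abstract mentions.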