Incremental Processing of Principle B: Mismatches Between Neural Models and Humans

Forrest Davis


Abstract
Although neural language models qualitatively capture many human linguistic behaviors, recent work has demonstrated that they underestimate the true processing costs of ungrammatical structures. We extend these more fine-grained comparisons between humans and models by investigating the interaction between Principle B and coreference processing. While humans use Principle B to block certain structural positions from affecting their incremental processing, we find that GPT-based language models are influenced by antecedents in these ungrammatical positions. We conclude by relating this mismatch between neural models and humans to properties of the training data, and we suggest that certain aspects of human processing behavior do not follow directly from linguistic data.
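
The comparisons described in the abstract hinge on a model's incremental processing cost at a critical word, standardly operationalized as surprisal (negative log probability). As a minimal sketch of how such a measurement could be set up, the Python snippet below scores a pronoun's surprisal with an off-the-shelf GPT-2 model via the Hugging Face transformers library; the model choice, the surprisal helper, and the example sentence are illustrative assumptions, not the paper's actual materials or stimuli.

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def surprisal(context: str, target: str) -> float:
    """Surprisal, in bits, of `target` given `context` under the model."""
    context_ids = tokenizer.encode(context)
    target_ids = tokenizer.encode(" " + target)  # leading space for GPT-2's BPE
    input_ids = torch.tensor([context_ids + target_ids])
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    for i, tok in enumerate(target_ids):
        # Logits at position p predict the token at position p + 1.
        total += log_probs[0, len(context_ids) + i - 1, tok].item()
    return -total / math.log(2)  # convert nats to bits

# Illustrative stimulus (an assumption, not from the paper): Principle B
# blocks coreference between the pronoun and the local subject "John", so
# for human readers "him" cannot pick up "John" here.
print(surprisal("Mary said that John admired", "him"))
print(surprisal("Mary said that John admired", "her"))

Comparing such conditions probes whether a grammatically inaccessible antecedent ("John") shifts the model's expectations at the pronoun, which is the sensitivity the abstract reports for GPT-based models.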
Anthology ID: 2022.conll-1.11
Volume: Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates (Hybrid)
Venue: CoNLL
Publisher: Association for Computational Linguistics
Pages: 144–156
URL: https://aclanthology.org/2022.conll-1.11
Cite (ACL): Forrest Davis. 2022. Incremental Processing of Principle B: Mismatches Between Neural Models and Humans. In Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL), pages 144–156, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal): Incremental Processing of Principle B: Mismatches Between Neural Models and Humans (Davis, CoNLL 2022)
PDF: https://preview.aclanthology.org/ingestion-script-update/2022.conll-1.11.pdf