Are Larger Language Models Better at Disambiguation?

Ziyuan Cao, William Schuler


Abstract
Humans routinely deal with temporary syntactic ambiguity during incremental sentence processing. Sentences in which such temporary ambiguity causes processing difficulty, often reflected in increased reading times, are referred to as garden-path sentences. Garden-path theories of sentence processing attribute the increase in reading time to reanalysis of the previously ambiguous syntactic structure to make it consistent with the new disambiguating text. It is unknown whether transformer-based language models successfully resolve such temporary ambiguity after encountering the disambiguating text. We investigated this question by analyzing completions generated by language models for a type of garden-path sentence with ambiguity between a complement-clause interpretation and a relative-clause interpretation. We found that larger language models are worse at resolving such ambiguity.
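The abstract's completion-analysis setup can be illustrated with a minimal sketch. The sentence material and the classification heuristic below are illustrative assumptions, not the authors' actual stimuli or scoring criteria; the prefix is a textbook complement-clause/relative-clause garden path in which "that ..." can introduce either a complement of "told" or a relative clause modifying "the woman".

```python
# Hypothetical sketch of classifying model completions of a temporarily
# ambiguous prefix. The prefix and heuristic are illustrative assumptions.

# Ambiguous between a complement-clause (CC) reading ("told the woman
# [that he was having trouble with X]") and a relative-clause (RC)
# reading ("told [the woman that he was having trouble with] ...").
PREFIX = "The psychologist told the woman that he was having trouble with"

def classify_completion(completion: str) -> str:
    """Crude heuristic: if the completion supplies an object for the
    stranded preposition "with", the CC reading survives; if it instead
    continues the main clause (here, a "to"-infinitive), the prefix must
    be reanalyzed as an RC modifying "the woman"."""
    words = completion.strip().split()
    first = words[0].lower() if words else ""
    # A "to"-infinitive continuation signals the RC reanalysis here.
    return "RC" if first == "to" else "CC"

completions = [
    "his marriage and needed advice.",  # CC: "with" gets an object
    "to see a specialist right away.",  # RC: the woman he had trouble with
]
print([classify_completion(c) for c in completions])  # ['CC', 'RC']
```

In the paper's actual setup, completions would be sampled from language models of different sizes and the proportion of completions consistent with the disambiguated parse compared across models; this sketch only shows the classification step on hand-written continuations.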
Anthology ID:
2025.cmcl-1.20
Volume:
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico, USA
Editors:
Tatsuki Kuribayashi, Giulia Rambelli, Ece Takmaz, Philipp Wicke, Jixing Li, Byung-Doh Oh
Venues:
CMCL | WS
Publisher:
Association for Computational Linguistics
Pages:
155–164
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.cmcl-1.20/
Cite (ACL):
Ziyuan Cao and William Schuler. 2025. Are Larger Language Models Better at Disambiguation?. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 155–164, Albuquerque, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
Are Larger Language Models Better at Disambiguation? (Cao & Schuler, CMCL 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.cmcl-1.20.pdf