Leveraging Human Production-Interpretation Asymmetries to Test LLM Cognitive Plausibility

Suet-Ying Lam, Qingcheng Zeng, Jingyi Wu, Rob Voigt


Abstract
Whether large language models (LLMs) process language similarly to humans has been the subject of much theoretical and practical debate. We examine this question through the lens of the production-interpretation distinction found in human sentence processing and evaluate the extent to which instruction-tuned LLMs replicate this distinction. Using an empirically documented asymmetry between pronoun production and interpretation in humans for implicit causality verbs as a testbed, we find that some LLMs do quantitatively and qualitatively reflect human-like asymmetries between production and interpretation. We demonstrate that whether this behavior holds depends upon both model size (with larger models more likely to reflect human-like patterns) and the choice of meta-linguistic prompts used to elicit the behavior. Our code and results are available here.
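To make the meta-linguistic prompting described in the abstract concrete, the following is a minimal illustrative sketch of how a production-style prompt (sentence completion) and an interpretation-style prompt (pronoun resolution) might be posed to an instruction-tuned model for an implicit causality verb. The model name, verb choice, and prompt wording below are assumptions for illustration only; the authors' actual materials are in their released code.

```python
# Minimal sketch (not the paper's implementation): probing production vs.
# interpretation of a pronoun after an implicit causality (IC) verb.
# MODEL_NAME, the example sentence, and the prompt wording are hypothetical.

from transformers import pipeline

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical instruction-tuned model
generator = pipeline("text-generation", model=MODEL_NAME)

context = "Mary frightened Sue because"  # "frighten" is a subject-biased IC verb

# Production-style prompt: ask the model to continue the fragment with a pronoun,
# mirroring a human sentence-completion (production) task.
production_prompt = (
    f"Complete the sentence with a pronoun and a short continuation:\n{context}"
)

# Interpretation-style prompt: supply the pronoun and ask who it refers to,
# mirroring a human comprehension (interpretation) question.
interpretation_prompt = (
    f"{context} she was upset.\nWho does 'she' refer to, Mary or Sue?"
)

for label, prompt in [("production", production_prompt),
                      ("interpretation", interpretation_prompt)]:
    out = generator(prompt, max_new_tokens=20, do_sample=False)
    print(label, "->", out[0]["generated_text"])
```

Comparing which referent the model favors in each task, aggregated over many verbs, is one way to quantify a production-interpretation asymmetry of the kind the paper tests.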
Anthology ID:
2025.acl-short.14
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
158–171
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-short.14/
Cite (ACL):
Suet-Ying Lam, Qingcheng Zeng, Jingyi Wu, and Rob Voigt. 2025. Leveraging Human Production-Interpretation Asymmetries to Test LLM Cognitive Plausibility. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 158–171, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Leveraging Human Production-Interpretation Asymmetries to Test LLM Cognitive Plausibility (Lam et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-short.14.pdf