Large Language Models Are Partially Primed in Pronoun Interpretation

Suet-Ying Lam, Qingcheng Zeng, Kexun Zhang, Chenyu You, Rob Voigt


Abstract
While a large body of literature suggests that large language models (LLMs) acquire rich linguistic representations, little is known about whether they adapt to linguistic biases in a human-like way. The present study probes this question by asking whether LLMs display human-like referential biases using stimuli and procedures from real psycholinguistic experiments. Recent psycholinguistic studies suggest that humans adapt their referential biases in response to recent exposure to referential patterns; closely replicating three relevant psycholinguistic experiments from Johnson & Arnold (2022) in an in-context learning (ICL) framework, we found that InstructGPT adapts its pronominal interpretations in response to the frequency of referential patterns in the local discourse, though in a limited fashion: adaptation was observed only for syntactic, not semantic, biases. By contrast, FLAN-UL2 fails to generate meaningful patterns. Our results provide further evidence that contemporary LLMs' discourse representations are sensitive to syntactic patterns in the local context but less so to semantic patterns. Our data and code are available at https://github.com/zkx06111/llm_priming.
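
The abstract describes an in-context learning paradigm in which a model is exposed to a sequence of priming trials exhibiting one referential pattern before interpreting an ambiguous test pronoun. The sketch below is a rough illustration of that setup, not the authors' actual stimuli, prompts, or code (those are in the linked repository): the priming items, test item, and the `query_llm` placeholder are all hypothetical, with `query_llm` standing in for a completion call to an InstructGPT-style model.

```python
# Illustrative sketch of an in-context priming probe for pronoun
# interpretation (hypothetical stimuli; see the paper's repo for the
# real materials). Each prime resolves an ambiguous pronoun to the
# grammatical SUBJECT, mimicking a subject-biased exposure phase.
SUBJECT_PRIMES = [
    "Alan phoned Bill while he was cooking dinner. "
    "Question: Who was cooking dinner? Answer: Alan.",
    "Carl visited Dave after he finished work. "
    "Question: Who finished work? Answer: Carl.",
]

# Ambiguous test trial: "he" could refer to the subject (Eric) or the
# object (Frank).
TEST_ITEM = (
    "Eric greeted Frank when he entered the room. "
    "Question: Who entered the room? Answer:"
)


def build_prompt(primes: list[str], test: str) -> str:
    """Concatenate priming trials and the test trial into one ICL prompt."""
    return "\n\n".join(primes + [test])


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM completion call; swap in a real client."""
    raise NotImplementedError


if __name__ == "__main__":
    prompt = build_prompt(SUBJECT_PRIMES, TEST_ITEM)
    print(prompt)
    # answer = query_llm(prompt)
    # Adaptation would surface as a higher rate of subject ("Eric")
    # responses after subject-biased primes than after object-biased ones.
```

Under this paradigm, comparing response distributions across exposure conditions (rather than any single completion) is what would reveal whether the model's referential bias shifts with the local discourse.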
Anthology ID:
2023.findings-acl.605
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9493–9506
URL:
https://aclanthology.org/2023.findings-acl.605
DOI:
10.18653/v1/2023.findings-acl.605
Cite (ACL):
Suet-Ying Lam, Qingcheng Zeng, Kexun Zhang, Chenyu You, and Rob Voigt. 2023. Large Language Models Are Partially Primed in Pronoun Interpretation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 9493–9506, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Large Language Models Are Partially Primed in Pronoun Interpretation (Lam et al., Findings 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2023.findings-acl.605.pdf