What to Predict? Exploring How Sentence Structure Influences Contrast Predictions in Humans and Large Language Models

Shuqi Wang, Xufeng Duan, Zhenguang Cai


Abstract
This study examines how sentence structure shapes contrast predictions in both humans and large language models (LLMs). Using Mandarin ditransitive constructions as a testbed (double object, DO: “She gave the girl the candy, but not...” vs. prepositional object, PO: “She gave the candy to the girl, but not...”), we employed a sentence continuation task involving three human groups (written, spoken, and prosodically normalized spoken stimuli) and three LLMs (GPT-4o, LLaMA-3, and Qwen-2.5). Two principal findings emerged: (1) Although human participants predominantly focused on the theme (e.g., “the candy”), contrast predictions were significantly modulated by sentence structure, particularly in spoken contexts, where the sentence-final element drew more attention. (2) While LLMs showed a similar reliance on structure, they displayed a larger effect size and resembled human spoken data more closely than written data, indicating a stronger emphasis on linear order in generating contrast predictions. By adopting a unified psycholinguistic paradigm, this study advances our understanding of predictive language processing in both humans and LLMs and informs research on human–model alignment in linguistic tasks.
Anthology ID:
2025.cmcl-1.28
Volume:
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico, USA
Editors:
Tatsuki Kuribayashi, Giulia Rambelli, Ece Takmaz, Philipp Wicke, Jixing Li, Byung-Doh Oh
Venues:
CMCL | WS
Publisher:
Association for Computational Linguistics
Pages:
244–252
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.cmcl-1.28/
Cite (ACL):
Shuqi Wang, Xufeng Duan, and Zhenguang Cai. 2025. What to Predict? Exploring How Sentence Structure Influences Contrast Predictions in Humans and Large Language Models. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 244–252, Albuquerque, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
What to Predict? Exploring How Sentence Structure Influences Contrast Predictions in Humans and Large Language Models (Wang et al., CMCL 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.cmcl-1.28.pdf