The Linearity of the Effect of Surprisal on Reading Times across Languages

Weijie Xu, Jason Chon, Tianran Liu, Richard Futrell


Abstract
In psycholinguistics, surprisal theory posits that the amount of online processing effort a human comprehender expends per word positively correlates with the surprisal of that word given its preceding context. Beyond this overall correlation, the specific quantitative form that processing effort takes as a function of surprisal offers insight into the underlying cognitive mechanisms of language processing. Previous studies, focusing on English, have examined the linearity of the effect of surprisal on reading times. Here, we extend the investigation to eyetracking corpora in seven languages: Danish, Dutch, English, German, Japanese, Mandarin, and Russian. We find evidence for superlinearity in some languages, but the results are highly sensitive to which language model is used to estimate surprisal.
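For concreteness, the quantity the paper relates to reading times is the surprisal of a word, -log2 p(w_t | w_<t) under a language model. The sketch below illustrates per-token surprisal estimation using GPT-2 via the Hugging Face transformers library; the model choice is purely illustrative, since this page does not state which language models the authors used.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative stand-in model; the paper's actual models are not listed here.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str):
    """Return (token, surprisal-in-bits) pairs, where the surprisal of
    token w_t is -log2 p(w_t | w_<t) under the language model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits, dim=-1)
    # The prediction at position t-1 scores the token at position t,
    # so the first token receives no surprisal estimate here.
    scores = logprobs[0, :-1].gather(1, ids[0, 1:, None]).squeeze(1)
    bits = (-scores / torch.log(torch.tensor(2.0))).tolist()
    tokens = tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist())
    return list(zip(tokens, bits))

print(token_surprisals("The cat sat on the mat."))

The linearity question the paper asks is then whether reading time is best modeled as an affine function of these surprisal values or as a superlinear one.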
Anthology ID: 2023.findings-emnlp.1052
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 15711–15721
URL: https://aclanthology.org/2023.findings-emnlp.1052
DOI: 10.18653/v1/2023.findings-emnlp.1052
Cite (ACL):
Weijie Xu, Jason Chon, Tianran Liu, and Richard Futrell. 2023. The Linearity of the Effect of Surprisal on Reading Times across Languages. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15711–15721, Singapore. Association for Computational Linguistics.
Cite (Informal):
The Linearity of the Effect of Surprisal on Reading Times across Languages (Xu et al., Findings 2023)
PDF: https://preview.aclanthology.org/ingest-2024-clasp/2023.findings-emnlp.1052.pdf