Transformers Can Model Human Hyperprediction in Buzzer Quiz

Yoichiro Yamashita, Yuto Harada, Yohei Oseki


Abstract
Humans tend to predict upcoming words during sentence comprehension, and under certain circumstances they can predict longer coherent word sequences as well. In this paper, we investigate whether Transformers can model such hyperprediction observed in humans during sentence processing, specifically in the context of Japanese buzzer quizzes. We conducted eye-tracking experiments in which participants read the first half of buzzer quiz questions and predicted the second half, and we modeled their reading times using GPT-2. By modeling the reading time of each word in the first half of a question with GPT-2 surprisal, we examined under what conditions fine-tuned language models better predict reading times. We found that GPT-2 surprisal effectively explains the reading times of quiz experts as they read the first half of a question while predicting the latter half. Fine-tuning the language model on quiz questions lowered its perplexity, and lower perplexity generally corresponded to higher psychometric predictive power; however, fine-tuning on an excessive amount of data pushed perplexity down further while the resulting model's psychometric predictive power declined. Overall, our findings suggest that a moderate amount of fine-tuning data is required to model human hyperprediction.
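For concreteness, surprisal is the negative log-probability of a word given its preceding context, surprisal(w_t) = -log2 P(w_t | w_1 ... w_{t-1}), and perplexity is the exponential of the mean negative log-likelihood over a corpus; the abstract's method regresses per-word surprisal against reading times. Below is a minimal sketch of how such per-token surprisals (and perplexity) can be computed with a Hugging Face causal LM. The "gpt2" checkpoint and the English example sentence are illustrative placeholders, not the paper's actual Japanese GPT-2 model, tokenization, or fine-tuning setup.

```python
# Sketch: per-token surprisal and perplexity from a causal LM
# (Hugging Face `transformers`). Checkpoint and sentence are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str):
    """Return (token, surprisal in bits) pairs for each token after the
    first, plus the per-token negative log-likelihoods in nats."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    # Log-probability assigned to each actual next token given its prefix.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    next_ids = ids[0, 1:]
    nll = -log_probs[torch.arange(next_ids.size(0)), next_ids]  # nats
    bits = nll / torch.log(torch.tensor(2.0))
    tokens = tokenizer.convert_ids_to_tokens(next_ids.tolist())
    return list(zip(tokens, bits.tolist())), nll

pairs, nll = token_surprisals("The capital of France is Paris.")
for tok, s in pairs:
    print(f"{tok}\t{s:.2f} bits")
# Perplexity over this text: exp of the mean NLL in nats.
print(f"perplexity: {torch.exp(nll.mean()).item():.1f}")
```

In a reading-time analysis of this kind, the printed surprisal values would typically serve as a predictor in a (mixed-effects) regression over per-word reading times, and the change in model fit from adding surprisal is one common measure of psychometric predictive power.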
Anthology ID:
2025.cmcl-1.27
Volume:
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico, USA
Editors:
Tatsuki Kuribayashi, Giulia Rambelli, Ece Takmaz, Philipp Wicke, Jixing Li, Byung-Doh Oh
Venues:
CMCL | WS
Publisher:
Association for Computational Linguistics
Pages:
232–243
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.cmcl-1.27/
Cite (ACL):
Yoichiro Yamashita, Yuto Harada, and Yohei Oseki. 2025. Transformers Can Model Human Hyperprediction in Buzzer Quiz. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 232–243, Albuquerque, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
Transformers Can Model Human Hyperprediction in Buzzer Quiz (Yamashita et al., CMCL 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.cmcl-1.27.pdf