Surprisal from Larger Transformer-based Language Models Predicts fMRI Data More Poorly

Yi-Chien Lin, William Schuler


Abstract
There has been considerable interest in using surprisal from Transformer-based language models (LMs) as a predictor of human sentence processing difficulty. Recent work has observed an inverse scaling relationship between Transformers’ per-word estimated probability and the predictive power of their surprisal estimates on reading times, showing that LMs with more parameters and trained on more data are less predictive of human reading times. However, these studies focused on predicting latency-based measures. Tests on brain imaging data have not shown a trend in any direction when using a relatively small set of LMs, leaving open the possibility that the inverse scaling phenomenon is confined to latency data. This study therefore conducted a more comprehensive evaluation using surprisal estimates from 17 pre-trained LMs across three LM families on two functional magnetic resonance imaging (fMRI) datasets. Results show that the inverse scaling relationship between models’ per-word estimated probability and model fit still obtains on both datasets, resolving the inconclusive results of previous work and indicating that this trend is not specific to latency-based measures.
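For readers unfamiliar with the predictor under study: below is a minimal sketch, not the authors' code, of how per-token surprisal, defined as surprisal(w_t) = -log2 P(w_t | w_1 ... w_{t-1}), is typically computed from a pre-trained autoregressive LM. The choice of GPT-2 via Hugging Face Transformers and the helper name token_surprisals are illustrative assumptions; the paper evaluates 17 LMs across three families.

import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative model choice; any autoregressive LM with a causal LM head works.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text):
    """Return (token, surprisal in bits) for each token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                  # [1, seq_len, vocab]
    # Log-probability of each token given its preceding context.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nats = -log_probs[torch.arange(targets.size(0)), targets]
    bits = nats / math.log(2)                       # convert nats to bits
    tokens = tokenizer.convert_ids_to_tokens(targets.tolist())
    return list(zip(tokens, bits.tolist()))

for tok, s in token_surprisals("The cat sat on the mat."):
    print(f"{tok:>10s}  {s:6.2f} bits")

In studies of this kind, word-level surprisal is usually obtained by summing the surprisals of a word's subword tokens; how those values are then aligned with fMRI time courses is specified in the paper itself.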
Anthology ID:
2026.eacl-short.11
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
179–186
URL:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-short.11/
Cite (ACL):
Yi-Chien Lin and William Schuler. 2026. Surprisal from Larger Transformer-based Language Models Predicts fMRI Data More Poorly. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers), pages 179–186, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Surprisal from Larger Transformer-based Language Models Predicts fMRI Data More Poorly (Lin & Schuler, EACL 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-short.11.pdf
Checklist:
 2026.eacl-short.11.checklist.pdf