A rebuttal of two common deflationary stances against LLM cognition

Zak Hussain, Rui Mata, Dirk U. Wulff


Abstract
Large language models (LLMs) are arguably the most predictive models of human cognition available. Despite their impressive human alignment, LLMs are often labeled as "just next-token predictors" that purportedly fall short of genuine cognition. We argue that these deflationary claims need further justification. Drawing on prominent cognitive and artificial intelligence research, we critically evaluate two forms of "Justaism" that dismiss LLM cognition by labeling LLMs as "just" simplistic entities, without specifying or substantiating the critical capacities these models supposedly lack. Our analysis highlights the need for a more measured discussion of LLM cognition to better inform future research and the development of artificial intelligence.
Anthology ID: 2025.findings-acl.1242
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 24208–24213
URL: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.1242/
Cite (ACL): Zak Hussain, Rui Mata, and Dirk U. Wulff. 2025. A rebuttal of two common deflationary stances against LLM cognition. In Findings of the Association for Computational Linguistics: ACL 2025, pages 24208–24213, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): A rebuttal of two common deflationary stances against LLM cognition (Hussain et al., Findings 2025)
PDF: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.1242.pdf