A Comprehensive Evaluation of Cognitive Biases in LLMs

Simon Malberg, Roman Poletukhin, Carolin Schuster, Georg Groh


Abstract
We present a large-scale evaluation of 30 cognitive biases in 20 state-of-the-art large language models (LLMs) under various decision-making scenarios. Our contributions include a novel general-purpose test framework for reliable and large-scale generation of tests for LLMs, a benchmark dataset with 30,000 tests for detecting cognitive biases in LLMs, and a comprehensive assessment of the biases found in the 20 evaluated LLMs. Our work confirms and broadens previous findings suggesting the presence of cognitive biases in LLMs by reporting evidence of all 30 tested biases in at least some of the 20 LLMs. We publish our framework code and dataset to encourage future research on cognitive biases in LLMs: https://github.com/simonmalberg/cognitive-biases-in-llms.
Anthology ID:
2025.nlp4dh-1.50
Volume:
Proceedings of the 5th International Conference on Natural Language Processing for Digital Humanities
Month:
May
Year:
2025
Address:
Albuquerque, USA
Editors:
Mika Hämäläinen, Emily Öhman, Yuri Bizzoni, So Miyagawa, Khalid Alnajjar
Venues:
NLP4DH | WS
Publisher:
Association for Computational Linguistics
Pages:
578–613
URL:
https://preview.aclanthology.org/landing_page/2025.nlp4dh-1.50/
Cite (ACL):
Simon Malberg, Roman Poletukhin, Carolin Schuster, and Georg Groh. 2025. A Comprehensive Evaluation of Cognitive Biases in LLMs. In Proceedings of the 5th International Conference on Natural Language Processing for Digital Humanities, pages 578–613, Albuquerque, USA. Association for Computational Linguistics.
Cite (Informal):
A Comprehensive Evaluation of Cognitive Biases in LLMs (Malberg et al., NLP4DH 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.nlp4dh-1.50.pdf