Do Large Language Models Learn Human-Like Strategic Preferences?

Jesse Roberts, Kyle Moore, Douglas Fisher


Abstract
In this paper, we evaluate whether LLMs learn to make human-like preference judgements in strategic scenarios, comparing their behavior against known empirical results. Solar and Mistral are shown to exhibit stable, value-based preferences consistent with humans, including a human-like preference for cooperation in the prisoner’s dilemma (with the stake-size effect) and the traveler’s dilemma (with the penalty-size effect). We establish a relationship between model size, value-based preference, and superficiality. We also find that the models that tend to be less brittle rely on sliding-window attention, suggesting a potential link. Finally, we contribute a novel method for constructing preference relations from arbitrary LLMs and provide support for a hypothesis regarding human behavior in the traveler’s dilemma.
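The abstract mentions constructing preference relations from arbitrary LLMs. The sketch below is only an illustrative guess at one common way to elicit a pairwise preference, by comparing the log-probability a causal LM assigns to two candidate answers; it is not the authors' method, and the model name, prompt, and options are placeholders.

```python
# Hypothetical sketch: eliciting a pairwise preference from a causal LLM by
# comparing the total log-probability it assigns to two candidate answers.
# NOT the method from Roberts et al. (2025); model, prompt, and options are
# placeholders chosen only for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-v0.1"  # assumed model; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)
model.eval()

def option_logprob(prompt: str, option: str) -> float:
    """Sum the log-probabilities of the option's tokens given the prompt.

    Assumes the prompt tokenization is a prefix of the prompt+option
    tokenization (usually true, but tokenizers can merge across the boundary).
    """
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probabilities at each position for predicting the *next* token.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_lps = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the positions corresponding to the option's tokens.
    n_prompt = prompt_ids.shape[1]
    return token_lps[0, n_prompt - 1:].sum().item()

def prefers(prompt: str, a: str, b: str) -> str:
    """Return whichever option the model assigns higher likelihood to."""
    return a if option_logprob(prompt, a) >= option_logprob(prompt, b) else b

if __name__ == "__main__":
    prompt = ("You are playing a one-shot prisoner's dilemma. "
              "Do you cooperate or defect? Answer: ")
    print(prefers(prompt, "cooperate", "defect"))
```

Summed log-probabilities favor shorter options; a length-normalized score (dividing by the option's token count) is a common variant when the candidate answers differ in length.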
Anthology ID:
2025.realm-1.8
Volume:
Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Ehsan Kamalloo, Nicolas Gontier, Xing Han Lu, Nouha Dziri, Shikhar Murty, Alexandre Lacoste
Venues:
REALM | WS
Publisher:
Association for Computational Linguistics
Pages:
97–108
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.realm-1.8/
DOI:
10.18653/v1/2025.realm-1.8
Cite (ACL):
Jesse Roberts, Kyle Moore, and Douglas Fisher. 2025. Do Large Language Models Learn Human-Like Strategic Preferences?. In Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025), pages 97–108, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Do Large Language Models Learn Human-Like Strategic Preferences? (Roberts et al., REALM 2025)
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.realm-1.8.pdf