MotiveBench: How Far Are We From Human-Like Motivational Reasoning in Large Language Models?

Xixian Yong, Jianxun Lian, Xiaoyuan Yi, Xiao Zhou, Xing Xie


Abstract
Large language models (LLMs) have been widely adopted as the core of agent frameworks in various scenarios, such as social simulations and AI companions. However, the extent to which they can replicate human-like motivations remains an underexplored question. Existing benchmarks are constrained by simplistic scenarios and the absence of character identities, resulting in an information asymmetry with real-world situations. To address this gap, we propose MotiveBench, which consists of 200 rich contextual scenarios and 600 reasoning tasks covering multiple levels of motivation. Using MotiveBench, we conduct extensive experiments on seven popular model families, comparing different scales and versions within each family. The results show that even the most advanced LLMs still fall short in achieving human-like motivational reasoning. Our analysis reveals key findings, including the difficulty LLMs face in reasoning about “love & belonging” motivations and their tendency toward excessive rationality and idealism. These insights highlight a promising direction for future research on the humanization of LLMs.
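To make the benchmark's structure concrete, below is a minimal, hypothetical Python sketch of how one MotiveBench-style item and its scoring might be represented. The field names, the Maslow-inspired level taxonomy (of which "love & belonging" is one level), and the toy example are assumptions for illustration, not the paper's actual schema or data.

```python
from dataclasses import dataclass

# Hypothetical Maslow-inspired motivation levels; "love & belonging" is the
# level the abstract reports as hardest for current LLMs. The exact taxonomy
# used by MotiveBench is an assumption here.
MOTIVATION_LEVELS = [
    "physiological",
    "safety",
    "love & belonging",
    "esteem",
    "self-actualization",
]

@dataclass
class MotiveTask:
    """One multiple-choice motivational-reasoning task (hypothetical schema)."""
    character_profile: str   # character identity/backstory grounding the scenario
    scenario: str            # rich contextual situation
    question: str            # e.g., "What most likely drives X's next action?"
    options: list[str]       # candidate motivations or behaviors
    answer_index: int        # index of the human-annotated answer
    motivation_level: str    # one of MOTIVATION_LEVELS

def accuracy(tasks: list[MotiveTask], predictions: list[int]) -> float:
    """Fraction of tasks where the model picked the annotated option."""
    if not tasks:
        return 0.0
    correct = sum(p == t.answer_index for t, p in zip(tasks, predictions))
    return correct / len(tasks)

# Toy usage with one invented item:
task = MotiveTask(
    character_profile="Ava, 34, recently moved to a new city for work.",
    scenario="Ava declines paid overtime to attend a neighbor's dinner party.",
    question="Which motivation best explains Ava's choice?",
    options=["financial security", "forming social bonds", "career prestige"],
    answer_index=1,
    motivation_level="love & belonging",
)
print(accuracy([task], [1]))  # 1.0
```

Grounding each task in a character profile, as sketched above, is what distinguishes this setup from identity-free benchmarks: the model must reason about what *this* person would want, not what an idealized agent would do.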
Anthology ID: 2025.findings-acl.1029
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues: Findings | WS
Publisher: Association for Computational Linguistics
Pages: 20059–20089
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1029/
Cite (ACL): Xixian Yong, Jianxun Lian, Xiaoyuan Yi, Xiao Zhou, and Xing Xie. 2025. MotiveBench: How Far Are We From Human-Like Motivational Reasoning in Large Language Models?. In Findings of the Association for Computational Linguistics: ACL 2025, pages 20059–20089, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): MotiveBench: How Far Are We From Human-Like Motivational Reasoning in Large Language Models? (Yong et al., Findings 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1029.pdf