Exploring the Choice Behavior of Large Language Models

Weidong Wu, Qinlin Zhao, Hao Chen, Lexin Zhou, Defu Lian, Hong Xie


Abstract
Large Language Models (LLMs) are increasingly deployed as human assistants across various domains in which they help make choices. However, the mechanisms behind LLMs' choice behavior remain unclear, posing risks in safety-critical situations. Inspired by the intrinsic and extrinsic motivation framework of Self-Determination Theory, a classic model of human behavior, and by its established research methodologies, we investigate the factors that influence LLMs' choice behavior by constructing a virtual question-answering (QA) platform with three experimental conditions, in which four models from the GPT and Llama series participate in repeated experiments. Our findings indicate that LLMs' behavior is influenced not only by intrinsic attention bias but also by extrinsic social influence, exhibiting patterns similar to the Matthew effect and conformity. We distinguish independent pathways for these two factors in LLMs' behavior through self-reports. This work provides new insights into LLMs' behavioral patterns and explores their human-like characteristics.
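The paper's experimental code is not reproduced on this page. As a rough illustration only, the sketch below shows one way a repeated choice experiment with an extrinsic social signal (vote counts attached to answer options) could be set up. The prompt wording, the vote-count condition, and the query_model() placeholder are all assumptions for illustration, not the authors' actual platform or protocol.

```python
import random
from collections import Counter

# Hypothetical sketch: repeatedly present an LLM with a set of answer
# options, optionally annotated with vote counts (an extrinsic social
# signal), and tally which option it picks across trials. All names
# and prompt text here are illustrative assumptions.

OPTIONS = ["Answer A", "Answer B", "Answer C", "Answer D"]

def build_prompt(options: list[str], votes: list[int] | None) -> str:
    """Format the choice prompt, with or without the social signal."""
    lines = ["Question: <question text here>", "Choose one answer:"]
    for i, opt in enumerate(options):
        tag = f" ({votes[i]} votes)" if votes is not None else ""
        lines.append(f"{i + 1}. {opt}{tag}")
    lines.append("Reply with the number of your choice only.")
    return "\n".join(lines)

def query_model(prompt: str) -> int:
    """Placeholder for a real LLM call (e.g., a GPT or Llama API);
    here it just returns a random option index so the sketch runs."""
    return random.randrange(len(OPTIONS))

def run_condition(votes: list[int] | None, trials: int = 100) -> Counter:
    """Run repeated trials under one condition and count the picks."""
    picks = Counter()
    prompt = build_prompt(OPTIONS, votes)
    for _ in range(trials):
        picks[query_model(prompt)] += 1
    return picks

if __name__ == "__main__":
    baseline = run_condition(votes=None)         # no social signal
    skewed = run_condition(votes=[90, 5, 3, 2])  # one option dominates
    print("baseline:", baseline)
    print("skewed votes:", skewed)
```

Comparing the pick distributions across the two conditions is the kind of measurement that would expose conformity-like shifts toward highly voted options.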
Anthology ID:
2025.findings-acl.270
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5194–5214
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.270/
Cite (ACL):
Weidong Wu, Qinlin Zhao, Hao Chen, Lexin Zhou, Defu Lian, and Hong Xie. 2025. Exploring the Choice Behavior of Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 5194–5214, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Exploring the Choice Behavior of Large Language Models (Wu et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.270.pdf