Noise, Adaptation, and Strategy: Assessing LLM Fidelity in Decision-Making

Yuanjun Feng, Vivek Choudhary, Yash Raj Shrestha


Abstract
Large language models (LLMs) are increasingly used for social-science simulations, yet most evaluations target task optimality rather than the variability and adaptation characteristic of human decision-making. We propose a process-oriented evaluation framework with progressive interventions (Intrinsicality, Instruction, and Imitation), and apply it to two classic economics tasks: the second-price auction and the newsvendor inventory problem. By default, LLMs adopt stable, conservative strategies that diverge from observed human behavior. Risk-framed instructions nudge LLMs toward more human-like behavior, though they also introduce complex irregularities. Incorporating human decision trajectories via in-context learning further narrows distributional gaps, indicating that models can absorb human patterns. Across all interventions, however, LLMs underexpress round-to-round variability relative to humans, revealing a persistent alignment gap in behavioral fidelity. Future evaluations of LLM-based social simulations should prioritize process-level realism.
Anthology ID:
2025.emnlp-main.391
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7704–7717
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.391/
Cite (ACL):
Yuanjun Feng, Vivek Choudhary, and Yash Raj Shrestha. 2025. Noise, Adaptation, and Strategy: Assessing LLM Fidelity in Decision-Making. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 7704–7717, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Noise, Adaptation, and Strategy: Assessing LLM Fidelity in Decision-Making (Feng et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.391.pdf
Checklist:
 2025.emnlp-main.391.checklist.pdf