PapersPlease: A Benchmark for Evaluating Motivational Values of Large Language Models Based on ERG Theory

Junho Myung, Yeon Su Park, Sunwoo Kim, Shin Yoo, Alice Oh


Abstract
Evaluating the performance and biases of large language models (LLMs) through role-playing scenarios is becoming increasingly common, as LLMs often exhibit biased behaviors in these contexts. Building on this line of research, we introduce PapersPlease, a benchmark consisting of 3,700 moral dilemmas designed to investigate LLMs’ decision-making in prioritizing various levels of human needs. In our setup, LLMs act as immigration inspectors deciding whether to approve or deny entry based on the short narratives of people. These narratives are constructed using the Existence, Relatedness, and Growth (ERG) theory, which categorizes human needs into three hierarchical levels. Our analysis of six LLMs reveals statistically significant patterns in decision-making, suggesting that LLMs encode implicit preferences. Additionally, our evaluation of the impact of incorporating social identities into the narratives shows varying responsiveness based on both motivational needs and identity cues, with some models exhibiting higher denial rates for marginalized identities. All data is publicly available at https://github.com/yeonsuuuu28/papers-please.
Anthology ID:
2025.gem-1.47
Volume:
Proceedings of the Fourth Workshop on Generation, Evaluation and Metrics (GEM²)
Month:
July
Year:
2025
Address:
Vienna, Austria and virtual meeting
Editors:
Kaustubh Dhole, Miruna Clinciu
Venues:
GEM | WS
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
522–531
Language:
URL:
https://preview.aclanthology.org/corrections-2025-08/2025.gem-1.47/
DOI:
Bibkey:
Cite (ACL):
Junho Myung, Yeon Su Park, Sunwoo Kim, Shin Yoo, and Alice Oh. 2025. PapersPlease: A Benchmark for Evaluating Motivational Values of Large Language Models Based on ERG Theory. In Proceedings of the Fourth Workshop on Generation, Evaluation and Metrics (GEM²), pages 522–531, Vienna, Austria and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
PapersPlease: A Benchmark for Evaluating Motivational Values of Large Language Models Based on ERG Theory (Myung et al., GEM 2025)
Copy Citation:
PDF:
https://preview.aclanthology.org/corrections-2025-08/2025.gem-1.47.pdf