Volkan Ustun


2025

Implicit Behavioral Alignment of Language Agents in High-Stakes Crowd Simulations
Yunzhe Wang | Gale Lucas | Burcin Becerik-Gerber | Volkan Ustun
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Language-driven generative agents have enabled large-scale social simulations with transformative uses, from interpersonal training to aiding global policy-making. However, recent studies indicate that generative agent behaviors often deviate from expert expectations and real-world data—a phenomenon we term the *Behavior-Realism Gap*. To address this, we introduce a theoretical framework called Persona-Environment Behavioral Alignment (PEBA), formulated as a distribution matching problem grounded in Lewin’s behavior equation, which states that behavior is a function of the person and their environment. Leveraging PEBA, we propose PersonaEvolve (PEvo), an LLM-based optimization algorithm that iteratively refines agent personas, implicitly aligning their collective behaviors with realistic expert benchmarks within a specified environmental context. We validate PEvo in an active shooter incident simulation we developed, achieving an 84% average reduction in distributional divergence compared to no steering and a 34% improvement over explicit instruction baselines. Results also show that PEvo-refined personas generalize to novel, related simulation scenarios. Our method greatly enhances behavioral realism and reliability in high-stakes social simulations. More broadly, the PEBA-PEvo framework provides a principled approach to developing trustworthy LLM-driven social simulations.
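As a reading aid rather than the paper's own notation, one way to sketch the PEBA formulation described above: let $P$ denote the set of agent personas, $E$ the environmental context, $\pi_{\mathrm{sim}}(\cdot \mid P, E)$ the collective behavior distribution induced by the simulation, and $\pi^{*}$ the expert benchmark distribution; the symbols and the choice of divergence $D$ are assumptions, not taken from the abstract.

```latex
% Hedged sketch (assumed notation): Lewin's behavior equation and
% PEBA viewed as distribution matching over agent personas.
\begin{align}
  B &= f(P, E) \\
  P^{\star} &= \arg\min_{P} \; D\!\left(\pi_{\mathrm{sim}}(\cdot \mid P, E) \,\middle\|\, \pi^{*}\right)
\end{align}
```

Under this reading, PEvo would be the iterative, LLM-driven search over $P$ that reduces $D$; the abstract reports the resulting divergence reductions but does not specify the particular divergence measure or the refinement steps.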

2015

Combining Distributed Vector Representations for Words
Justin Garten | Kenji Sagae | Volkan Ustun | Morteza Dehghani
Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing