Marlene Lutz
2026
Do Psychometric Tests Work for Large Language Models? Evaluation of Tests on Sexism, Racism, and Morality
Jana Jung | Marlene Lutz | Indira Sen | Markus Strohmaier
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Psychometric tests are increasingly used to assess psychological constructs in large language models (LLMs). However, it remains unclear whether these tests – originally developed for humans – yield meaningful results when applied to LLMs. In this study, we systematically evaluate the reliability and validity of human psychometric tests on 17 LLMs for three constructs: sexism, racism, and morality. We find moderate reliability across multiple item and prompt variations. Validity is evaluated through both convergent (i.e., testing theory-based inter-test correlations) and ecological approaches (i.e., testing the alignment between test scores and behavior in real-world downstream tasks). Crucially, we find that psychometric test scores do not align with, and in some cases even negatively correlate with, model behavior in downstream tasks, indicating low ecological validity. Our results highlight that systematic evaluations of psychometric tests on LLMs are essential before interpreting their scores. Our findings also suggest that psychometric tests designed for humans cannot be applied directly to LLMs without adaptation.
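The ecological-validity check described in the abstract can be illustrated with a minimal sketch: administer test items to each model, average the ratings into a test score, and correlate those scores with the models' behavior in a downstream task. All model names, ratings, and downstream rates below are hypothetical stand-ins, not the paper's actual items, models, or results.

```python
"""Sketch of an ecological-validity check in the spirit of the paper:
correlate psychometric test scores with downstream behavior across models."""
from statistics import mean
from scipy.stats import spearmanr

# Hypothetical Likert-scale ratings (1-5) parsed from each model's answers
# to two sexism-inventory-style items; in practice these would come from
# prompting each LLM and parsing its completions.
ratings = {
    "model-a": [2, 1],
    "model-b": [4, 3],
    "model-c": [1, 2],
    "model-d": [3, 4],
}

# Hypothetical per-model rate of sexist outputs in a downstream generation task.
downstream = {"model-a": 0.12, "model-b": 0.31, "model-c": 0.08, "model-d": 0.22}

models = sorted(ratings)
scores = [mean(ratings[m]) for m in models]   # psychometric test score per model
behavior = [downstream[m] for m in models]    # observed downstream behavior

# A low or negative correlation would indicate low ecological validity.
rho, p = spearmanr(scores, behavior)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```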
Persona-driven Simulation of Voting Behavior in the European Parliament with Large Language Models
Maximilian Kreutner | Marlene Lutz | Markus Strohmaier
Findings of the Association for Computational Linguistics: EACL 2026
Large Language Models (LLMs) display remarkable capabilities in understanding and even producing political discourse, but have been found to consistently exhibit a progressive left-leaning bias. At the same time, so-called persona or identity prompts have been shown to produce LLM behavior that aligns with socioeconomic groups with which the base model is not aligned. In this work, we analyze whether zero-shot persona prompting with limited information can accurately predict individual voting decisions and, by aggregation, accurately predict the positions of European groups on a diverse set of policies. We evaluate whether predictions are stable in response to counterfactual arguments, different persona prompts, and generation methods. Finally, we find that we can simulate the voting behavior of Members of the European Parliament reasonably well, achieving a weighted F1 score of approximately 0.793. Our persona dataset of politicians in the 2024 European Parliament and our code are available at the following URL: https://github.com/dess-mannheim/european_parliament_simulation.
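A small sketch of the setup, assuming an illustrative persona format and prompt wording (the linked repository contains the actual templates): build a zero-shot persona prompt for a vote, send it to an LLM, and evaluate parsed predictions against recorded roll-call votes with a weighted F1 score.

```python
"""Sketch of zero-shot persona prompting for vote prediction and its
evaluation with a weighted F1 score. Persona fields, prompt wording,
and the toy labels are illustrative, not the paper's data."""
from sklearn.metrics import f1_score

def build_prompt(persona: dict, policy: str) -> str:
    # Hypothetical prompt template; the study's actual wording may differ.
    return (
        f"You are {persona['name']}, a Member of the European Parliament "
        f"from {persona['country']}, affiliated with {persona['group']}.\n"
        f"Policy proposal: {policy}\n"
        "How do you vote? Answer with exactly one of: For, Against, Abstain."
    )

persona = {"name": "Jane Doe", "country": "Austria", "group": "Renew Europe"}
prompt = build_prompt(persona, "Extend the EU emissions trading system to shipping.")
# `prompt` would be sent to an LLM; the parsed answer becomes the prediction.

# Hypothetical predictions vs. recorded votes for a handful of MEPs.
y_true = ["For", "Against", "For", "Abstain", "For"]
y_pred = ["For", "Against", "For", "For", "For"]
print(f"weighted F1 = {f1_score(y_true, y_pred, average='weighted'):.3f}")
```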
2025
Missing the Margins: A Systematic Literature Review on the Demographic Representativeness of LLMs
Indira Sen | Marlene Lutz | Elisa Rogers | David Garcia | Markus Strohmaier
Findings of the Association for Computational Linguistics: ACL 2025
Many applications of Large Language Models (LLMs) require them to either simulate people or offer personalized functionality, making the demographic representativeness of LLMs crucial for equitable utility. At the same time, we know little about the extent to which these models actually reflect the demographic attributes and behaviors of certain groups or populations, with conflicting findings in empirical research. To shed light on this debate, we review 211 papers on the demographic representativeness of LLMs. We find that while 29% of the studies report positive conclusions on the representativeness of LLMs, 30% of these do not evaluate LLMs across multiple demographic categories or within demographic subcategories. Another 35% and 47% of the papers concluding positively fail to specify these subcategories altogether for gender and race, respectively. Of the articles that do report subcategories, fewer than half include marginalized groups in their study. Finally, more than a third of the papers do not define the target population to whom their findings apply; of those that do define it either implicitly or explicitly, a large majority study only the U.S. Taken together, our findings suggest an inflated perception of LLM representativeness in the broader community. We recommend more precise evaluation methods and comprehensive documentation of demographic attributes to ensure the responsible use of LLMs for social applications.
The Prompt Makes the Person(a): A Systematic Evaluation of Sociodemographic Persona Prompting for Large Language Models
Marlene Lutz | Indira Sen | Georg Ahnert | Elisa Rogers | Markus Strohmaier
Findings of the Association for Computational Linguistics: EMNLP 2025
Persona prompting is increasingly used in large language models (LLMs) to simulate views of various sociodemographic groups. However, how a persona prompt is formulated can significantly affect outcomes, raising concerns about the fidelity of such simulations. Using five open-source LLMs, we systematically examine how different persona prompt strategies, specifically role adoption formats and demographic priming strategies, influence LLM simulations across 15 intersectional demographic groups in both open- and closed-ended tasks. Our findings show that LLMs struggle to simulate marginalized groups but that the choice of demographic priming and role adoption strategy significantly impacts their portrayal. Specifically, we find that prompting in an interview-style format and name-based priming can help reduce stereotyping and improve alignment. Surprisingly, smaller models like OLMo-2-7B outperform larger ones such as Llama-3.3-70B. Our findings offer actionable guidance for designing sociodemographic persona prompts in LLM-based simulation studies.
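To make the two design axes concrete, here is a sketch contrasting a direct role-adoption prompt with explicit demographic priming against an interview-style prompt with name-based priming. The wording and the persona details are illustrative stand-ins, not the paper's exact templates.

```python
"""Sketch of two persona-prompt formulations along the axes the paper
studies: role adoption format and demographic priming strategy."""

def direct_explicit(task: str) -> str:
    # Direct role adoption + explicit demographic priming:
    # the persona's attributes are stated outright.
    return f"You are a 35-year-old Black woman. {task}"

def interview_name_based(task: str) -> str:
    # Interview-style role adoption + name-based priming:
    # the persona is cued by a typical name rather than stated attributes.
    return f"Interviewer: Thank you for joining us, Keisha. {task}\nKeisha:"

task = "What is your opinion on remote work?"
print(direct_explicit(task))
print(interview_name_based(task))
```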
2024
Local Contrastive Editing of Gender Stereotypes
Marlene Lutz | Rochelle Choenni | Markus Strohmaier | Anne Lauscher
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Stereotypical bias encoded in language models (LMs) poses a threat to safe language technology, yet our understanding of how bias manifests in the parameters of LMs remains incomplete. We introduce local contrastive editing that enables the localization and editing of a subset of weights in a target model in relation to a reference model. We deploy this approach to identify and modify subsets of weights that are associated with gender stereotypes in LMs. Through a series of experiments we demonstrate that local contrastive editing can precisely localize and control a small subset (< 0.5%) of weights that encode gender bias. Our work (i) advances our understanding of how stereotypical biases can manifest in the parameter space of LMs and (ii) opens up new avenues for developing parameter-efficient strategies for controlling model properties in a contrastive manner.
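The core mechanic can be sketched as follows: locate the small fraction of weights where a target model differs most from a reference model, then edit only those weights. Selecting the top-k absolute parameter differences is one simple localization strategy in the spirit of the approach; the paper explores several localization and editing strategies, so treat this as an assumption-laden illustration.

```python
"""Sketch of local contrastive editing: overwrite the most-different
fraction of a target model's weights with the reference model's values,
leaving everything else untouched."""
import torch

def contrastive_edit(target, reference, fraction=0.005):
    ref_params = dict(reference.named_parameters())
    for name, t_param in target.named_parameters():
        r_data = ref_params[name].data
        diff = (t_param.data - r_data).abs().flatten()
        k = max(1, int(fraction * diff.numel()))
        idx = torch.topk(diff, k).indices          # locate most contrastive weights
        flat = t_param.data.flatten().clone()
        flat[idx] = r_data.flatten()[idx]          # edit only those weights
        t_param.data = flat.view_as(t_param.data)
    return target

# Toy demo: two small linear layers stand in for the target and reference LMs.
torch.manual_seed(0)
target, reference = torch.nn.Linear(8, 8), torch.nn.Linear(8, 8)
contrastive_edit(target, reference, fraction=0.05)
```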
2022
SensePOLAR: Word sense aware interpretability for pre-trained contextual word embeddings
Jan Engler | Sandipan Sikdar | Marlene Lutz | Markus Strohmaier
Findings of the Association for Computational Linguistics: EMNLP 2022
Adding interpretability to word embeddings represents an area of active research in text representation. Recent work has explored the potential of embedding words via so-called polar dimensions (e.g. good vs. bad, correct vs. wrong). Examples of such recent approaches include SemAxis, POLAR, FrameAxis, and BiImp. Although these approaches provide interpretable dimensions for words, they have not been designed to deal with polysemy, i.e. they cannot easily distinguish between different senses of words. To address this limitation, we present SensePOLAR, an extension of the original POLAR framework that enables word-sense aware interpretability for pre-trained contextual word embeddings. The resulting interpretable word embeddings achieve a level of performance that is comparable to original contextual word embeddings across a variety of natural language processing tasks including the GLUE and SQuAD benchmarks. Our work removes a fundamental limitation of existing approaches by offering users sense-aware interpretations for contextual word embeddings.
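The polar-projection idea underlying POLAR-style methods can be sketched briefly: score an embedding along interpretable axes spanned by antonym pairs. Random vectors stand in for real contextual embeddings here, and the sense disambiguation that SensePOLAR adds before building the axes is omitted, so this is a simplified illustration rather than the paper's method.

```python
"""Sketch of projecting a word embedding onto interpretable polar
dimensions spanned by antonym pairs (e.g. good vs. bad)."""
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Stand-ins for (sense-aware) embeddings of pole words and a target word.
emb = {w: rng.normal(size=dim) for w in ["good", "bad", "correct", "wrong", "review"]}

def polar_score(word_vec, pos_vec, neg_vec):
    # Project onto the axis pointing from the negative to the positive pole;
    # sign and magnitude of the coefficient are interpretable.
    axis = pos_vec - neg_vec
    return float(word_vec @ axis / (np.linalg.norm(axis) ** 2))

for pos, neg in [("good", "bad"), ("correct", "wrong")]:
    s = polar_score(emb["review"], emb[pos], emb[neg])
    print(f"'review' on {neg} <-> {pos}: {s:+.3f}")
```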