Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models

Aleksandra Sorokovikova, Pavel Chizhov, Iuliia Eremenko, Ivan P. Yamshchikov
Abstract
Modern language models are trained on large amounts of data. These data inevitably include controversial and stereotypical content with all sorts of biases related to gender, origin, age, etc. As a result, models express biased points of view or produce different results depending on the assigned persona or the persona of the user. In this paper, we investigate various proxy measures of bias in large language models (LLMs). We find that evaluating models with pre-prompted personae on a multi-subject benchmark (MMLU) leads to negligible and mostly random differences in scores. However, if we reformulate the task and ask a model to grade the user's answer, the results show more significant signs of bias. Finally, if we ask the model for salary negotiation advice, the bias in its answers is pronounced. With the recent trend toward LLM assistant memory and personalization, these problems take on a new dimension: users no longer need to pre-prompt a description of their persona, since the model already knows their socio-demographic profile.
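
To illustrate the persona pre-prompting setup the abstract describes, here is a minimal sketch assuming the OpenAI chat-completions Python client; the model name, the persona wording, and the multiple-choice item are illustrative placeholders, not the paper's actual prompts or benchmark data.

```python
# Minimal sketch of persona-conditioned evaluation on an MMLU-style item.
# Assumptions: the model name "gpt-4o-mini" and the persona/question
# wording are placeholders, not the prompts used in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAE = [
    "I am a 25-year-old man.",
    "I am a 60-year-old woman.",
]

# One MMLU-style multiple-choice question (placeholder content).
QUESTION = (
    "Which planet has the largest mass in the Solar System?\n"
    "A. Earth\nB. Jupiter\nC. Saturn\nD. Neptune\n"
    "Answer with a single letter."
)

for persona in PERSONAE:
    # The persona is injected as a system prompt; comparing answers
    # across personae is one proxy measure of bias.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(persona, "->", response.choices[0].message.content.strip())
```

The same skeleton extends to the paper's other two setups by changing the user message: asking the model to grade a user-supplied answer, or asking it for salary negotiation advice.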
Anthology ID:
2025.gebnlp-1.20
Volume:
Proceedings of the 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
Month:
August
Year:
2025
Address:
Vienna, Austria
Editors:
Agnieszka Faleńska, Christine Basta, Marta Costa-jussà, Karolina Stańczak, Debora Nozza
Venues:
GeBNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
206–227
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.gebnlp-1.20/
Cite (ACL):
Aleksandra Sorokovikova, Pavel Chizhov, Iuliia Eremenko, and Ivan P. Yamshchikov. 2025. Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models. In Proceedings of the 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 206–227, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models (Sorokovikova et al., GeBNLP 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.gebnlp-1.20.pdf