Women Are Beautiful, Men Are Leaders: Gender Stereotypes in Machine Translation and Language Modeling

Matúš Pikuliak, Stefan Oresko, Andrea Hrckova, Marian Simko


Abstract
We present GEST – a new manually created dataset designed to measure gender-stereotypical reasoning in language models and machine translation systems. GEST contains samples for 16 gender stereotypes about men and women (e.g., Women are beautiful, Men are leaders) that are compatible with the English language and 9 Slavic languages. The definitions of these stereotypes were informed by gender experts. We used GEST to evaluate English and Slavic masked LMs, English generative LMs, and machine translation systems. We discovered significant and consistent amounts of gender-stereotypical reasoning in almost all the evaluated models and languages. Our experiments confirm the previously postulated hypothesis that the larger the model, the more stereotypical it usually is.
Anthology ID:
2024.findings-emnlp.173
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3060–3083
URL:
https://preview.aclanthology.org/icon-24-ingestion/2024.findings-emnlp.173/
DOI:
10.18653/v1/2024.findings-emnlp.173
Cite (ACL):
Matúš Pikuliak, Stefan Oresko, Andrea Hrckova, and Marian Simko. 2024. Women Are Beautiful, Men Are Leaders: Gender Stereotypes in Machine Translation and Language Modeling. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 3060–3083, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Women Are Beautiful, Men Are Leaders: Gender Stereotypes in Machine Translation and Language Modeling (Pikuliak et al., Findings 2024)
PDF:
https://preview.aclanthology.org/icon-24-ingestion/2024.findings-emnlp.173.pdf