An Information-Theoretic Approach and Dataset for Probing Gender Stereotypes in Multilingual Masked Language Models

Victor Steinborn, Philipp Dufter, Haris Jabbar, Hinrich Schuetze


Abstract
Bias research in NLP is a rapidly growing and developing field. Similar to CrowS-Pairs (Nangia et al., 2020), we assess gender bias in masked language models (MLMs) by studying pairs of sentences with gender-swapped person references. Most bias research focuses on, and is often specific to, English. Using a novel methodology for creating sentence pairs that is applicable across languages, we create, based on CrowS-Pairs, a multilingual dataset for English, Finnish, German, Indonesian and Thai. Additionally, we propose SJSD, a new bias measure based on Jensen–Shannon divergence, which we argue retains more information from the model output probabilities than previously proposed bias measures for MLMs. Using multilingual MLMs, we find that SJSD diagnoses the same systematically biased behavior for non-English languages that previous studies have found for monolingual English pre-trained MLMs. SJSD outperforms the CrowS-Pairs measure, which struggles to find such biases for smaller non-English datasets.
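The SJSD measure builds on Jensen–Shannon divergence (JSD), a symmetric, bounded comparison of two probability distributions. As a minimal sketch (not the paper's exact SJSD formula), JSD between two token-probability distributions, e.g. an MLM's predictions for a masked position under the original and the gender-swapped sentence, could be computed like this:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) in nats.
    Assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon_divergence(p, q):
    """Symmetric JSD: the mean KL of p and q to their midpoint
    distribution m. Bounded between 0 and ln(2)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Toy example (illustrative numbers, not from the paper): predicted
# token distributions for a masked slot in the two sentence variants.
p = [0.7, 0.2, 0.1]
q = [0.1, 0.2, 0.7]
print(jensen_shannon_divergence(p, q))
```

Identical distributions yield a divergence of 0, and the value is symmetric in its arguments, which is one reason JSD is attractive for comparing sentence-pair outputs relative to asymmetric measures such as plain KL divergence.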
Anthology ID:
2022.findings-naacl.69
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
921–932
URL:
https://aclanthology.org/2022.findings-naacl.69
DOI:
10.18653/v1/2022.findings-naacl.69
Cite (ACL):
Victor Steinborn, Philipp Dufter, Haris Jabbar, and Hinrich Schuetze. 2022. An Information-Theoretic Approach and Dataset for Probing Gender Stereotypes in Multilingual Masked Language Models. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 921–932, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
An Information-Theoretic Approach and Dataset for Probing Gender Stereotypes in Multilingual Masked Language Models (Steinborn et al., Findings 2022)
PDF:
https://preview.aclanthology.org/add_acl24_videos/2022.findings-naacl.69.pdf
Software:
 2022.findings-naacl.69.software.zip
Video:
 https://preview.aclanthology.org/add_acl24_videos/2022.findings-naacl.69.mp4
Code
 vsteinborn/s_jsd-multilingual-bias
Data
CrowS-Pairs, StereoSet