Relating Word Embedding Gender Biases to Gender Gaps: A Cross-Cultural Analysis

Scott Friedman, Sonja Schmer-Galunder, Anthony Chen, Jeffrey Rye


Abstract
Modern models for common NLP tasks often employ machine learning techniques and train on journalistic, social media, or other culturally-derived text. These models have recently been scrutinized for racial and gender biases stemming from inherent bias in their training text. Such biases are often undesirable, and recent work proposes methods to rectify them; however, they may also shed light on actual racial or gender gaps in the culture(s) that produced the training text, thereby helping us understand cultural context through big data. This paper presents an approach for quantifying gender bias in word embeddings and then using these bias measures to characterize statistical gender gaps in education, politics, economics, and health. We validate these metrics on 2018 Twitter data spanning 51 U.S. regions and 99 countries. We correlate state and country word embedding biases with 18 international and 5 U.S.-based statistical gender gaps, characterizing regularities and predictive strength.
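
The pipeline described in the abstract (per-region bias scores derived from word embeddings, correlated against statistical gender-gap indices) can be illustrated with a minimal sketch. The he/she projection axis, the target word list, and the use of Pearson correlation below are common-baseline assumptions for illustration, not necessarily the authors' exact metrics; the embeddings and gap values are placeholders for data like the paper's 2018 regional Twitter corpora and published gap statistics.

    # Minimal sketch, assuming one embedding space per region and a
    # numeric gender-gap index per region. The he-she projection is a
    # standard bias baseline, not confirmed as the paper's exact metric.
    import numpy as np
    from scipy.stats import pearsonr

    def gender_bias(embeddings, target_words):
        """Mean cosine projection of target words onto the he-she axis."""
        direction = embeddings["he"] - embeddings["she"]
        direction /= np.linalg.norm(direction)
        scores = []
        for w in target_words:
            v = embeddings[w]
            scores.append(np.dot(v, direction) / np.linalg.norm(v))
        return float(np.mean(scores))

    def correlate_bias_with_gap(region_embeddings, gap_index, target_words):
        """Pearson correlation between per-region bias and a gap statistic.

        region_embeddings: dict mapping region name -> {word: np.ndarray}
        gap_index: dict mapping region name -> float (e.g., an education
                   or labor-participation gender-gap value)
        """
        regions = sorted(set(region_embeddings) & set(gap_index))
        biases = [gender_bias(region_embeddings[r], target_words)
                  for r in regions]
        gaps = [gap_index[r] for r in regions]
        return pearsonr(biases, gaps)  # (correlation r, p-value)

A positive correlation under this sketch would indicate that regions whose embeddings associate the target words more strongly with "he" also show larger measured gaps, which is the kind of regularity the abstract reports across U.S. states and countries.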
Anthology ID:
W19-3803
Volume:
Proceedings of the First Workshop on Gender Bias in Natural Language Processing
Month:
August
Year:
2019
Address:
Florence, Italy
Venue:
GeBNLP
Publisher:
Association for Computational Linguistics
Pages:
18–24
URL:
https://aclanthology.org/W19-3803
DOI:
10.18653/v1/W19-3803
Cite (ACL):
Scott Friedman, Sonja Schmer-Galunder, Anthony Chen, and Jeffrey Rye. 2019. Relating Word Embedding Gender Biases to Gender Gaps: A Cross-Cultural Analysis. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 18–24, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Relating Word Embedding Gender Biases to Gender Gaps: A Cross-Cultural Analysis (Friedman et al., GeBNLP 2019)
PDF:
https://preview.aclanthology.org/ingestion-script-update/W19-3803.pdf