Gender Bias in Natural Language Processing Across Human Languages
Abigail Matthews, Isabella Grasso, Christopher Mahoney, Yan Chen, Esma Wali, Thomas Middleton, Mariama Njie, Jeanna Matthews
Abstract
Natural Language Processing (NLP) systems are at the heart of many automated decision-making systems that make crucial recommendations about our future world. Gender bias in NLP has been well studied in English, but far less so in other languages. In this paper, a team including speakers of 9 languages (Chinese, Spanish, English, Arabic, German, French, Farsi, Urdu, and Wolof) reports and analyzes measurements of gender bias in the Wikipedia corpora for these 9 languages. We develop extensions to profession-level and corpus-level gender bias metric calculations originally designed for English and apply them to 8 other languages, including languages with grammatically gendered nouns that distinguish feminine, masculine, and neuter profession words. We discuss future work that would benefit immensely from a computational linguistics perspective.
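For readers unfamiliar with such metrics, the sketch below illustrates one common way a profession-level gender bias score can be computed from word embeddings, by projecting a profession word onto a "he" minus "she" direction (in the style of Bolukbasi et al., 2016). It is a minimal toy example with made-up vectors and is not the specific profession-level or corpus-level calculations extended in the paper.

```python
# Illustrative sketch only: a common profession-level gender bias measure
# projects a profession word's embedding onto a gender direction such as
# vec("he") - vec("she").  The toy vectors below are made up for demonstration
# and do not come from the paper or from any real corpus.
import numpy as np

toy_embeddings = {
    "he":       np.array([0.9, 0.1, 0.3, 0.0]),
    "she":      np.array([0.1, 0.9, 0.3, 0.0]),
    "engineer": np.array([0.7, 0.2, 0.5, 0.1]),
    "nurse":    np.array([0.2, 0.8, 0.5, 0.1]),
}

def profession_bias(word: str, emb: dict) -> float:
    """Signed projection of `word` onto the he-she axis (positive = male-leaning)."""
    gender_axis = emb["he"] - emb["she"]
    gender_axis = gender_axis / np.linalg.norm(gender_axis)
    vec = emb[word] / np.linalg.norm(emb[word])
    return float(np.dot(vec, gender_axis))

for profession in ("engineer", "nurse"):
    print(profession, round(profession_bias(profession, toy_embeddings), 3))

# A corpus-level summary can then be formed, e.g., as the mean (absolute)
# bias over a list of profession words for a given language's corpus.
```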
- Anthology ID: 2021.trustnlp-1.6
- Volume: Proceedings of the First Workshop on Trustworthy Natural Language Processing
- Month: June
- Year: 2021
- Address: Online
- Editors: Yada Pruksachatkun, Anil Ramakrishna, Kai-Wei Chang, Satyapriya Krishna, Jwala Dhamala, Tanaya Guha, Xiang Ren
- Venue: TrustNLP
- Publisher: Association for Computational Linguistics
- Pages: 45–54
- URL: https://aclanthology.org/2021.trustnlp-1.6
- DOI: 10.18653/v1/2021.trustnlp-1.6
- Cite (ACL): Abigail Matthews, Isabella Grasso, Christopher Mahoney, Yan Chen, Esma Wali, Thomas Middleton, Mariama Njie, and Jeanna Matthews. 2021. Gender Bias in Natural Language Processing Across Human Languages. In Proceedings of the First Workshop on Trustworthy Natural Language Processing, pages 45–54, Online. Association for Computational Linguistics.
- Cite (Informal): Gender Bias in Natural Language Processing Across Human Languages (Matthews et al., TrustNLP 2021)
- PDF: https://preview.aclanthology.org/naacl-24-ws-corrections/2021.trustnlp-1.6.pdf