Gender Bias Hidden Behind Chinese Word Embeddings: The Case of Chinese Adjectives

Meichun Jiao, Ziyang Luo


Abstract
Gender bias in word embeddings has gradually become an active research field in recent years. Most studies in this field focus on measurement and debiasing methods, with English as the target language. This paper investigates gender bias in static word embeddings from a unique perspective: Chinese adjectives. By training word representations with different models, we assess the gender bias encoded in the vectors of adjectives. Through a comparison between the produced results and a human-scored data set, we demonstrate how the gender bias encoded in word embeddings differs from people's attitudes.
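The abstract describes assessing gender bias in static adjective vectors. The paper does not reproduce its exact metric here, but a common approach in this line of work scores a word by how much closer its vector lies to a male anchor than to a female anchor (e.g., the vectors for 他 "he" and 她 "she"). The sketch below is a minimal, hypothetical illustration with toy 3-dimensional vectors, not the authors' implementation:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_bias(word_vec, male_vec, female_vec):
    """Positive score: the word vector is closer to the male anchor;
    negative: closer to the female anchor; zero: equidistant."""
    return cosine(word_vec, male_vec) - cosine(word_vec, female_vec)

# Toy vectors for illustration only; real experiments would use
# embeddings trained on a Chinese corpus (e.g., vectors for 他 and 她).
male   = np.array([1.0, 0.2, 0.0])
female = np.array([0.2, 1.0, 0.0])
adj    = np.array([0.9, 0.3, 0.1])  # hypothetical adjective vector

score = gender_bias(adj, male, female)
```

With trained embeddings, ranking adjectives by such a score and comparing the ranking against human judgments is one way to surface the mismatch the abstract describes.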
Anthology ID:
2021.gebnlp-1.2
Volume:
Proceedings of the 3rd Workshop on Gender Bias in Natural Language Processing
Month:
August
Year:
2021
Address:
Online
Editors:
Marta Costa-jussa, Hila Gonen, Christian Hardmeier, Kellie Webster
Venue:
GeBNLP
Publisher:
Association for Computational Linguistics
Pages:
8–15
URL:
https://aclanthology.org/2021.gebnlp-1.2
DOI:
10.18653/v1/2021.gebnlp-1.2
Cite (ACL):
Meichun Jiao and Ziyang Luo. 2021. Gender Bias Hidden Behind Chinese Word Embeddings: The Case of Chinese Adjectives. In Proceedings of the 3rd Workshop on Gender Bias in Natural Language Processing, pages 8–15, Online. Association for Computational Linguistics.
Cite (Informal):
Gender Bias Hidden Behind Chinese Word Embeddings: The Case of Chinese Adjectives (Jiao & Luo, GeBNLP 2021)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2021.gebnlp-1.2.pdf