Conceptor Debiasing of Word Representations Evaluated on WEAT

Saket Karve, Lyle Ungar, João Sedoc


Abstract
Bias in word representations, such as Word2Vec, has been widely reported and investigated, and efforts made to debias them. We apply the debiasing conceptor for post-processing both traditional and contextualized word embeddings. Our method can simultaneously remove racial and gender biases from word representations. Unlike standard debiasing methods, the debiasing conceptor can utilize heterogeneous lists of biased words without loss in performance. Finally, our empirical experiments show that the debiasing conceptor diminishes racial and gender bias of word representations as measured using the Word Embedding Association Test (WEAT) of Caliskan et al. (2017).
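The two operations the abstract refers to, negating a conceptor fit to a list of biased words and measuring bias with the WEAT effect size of Caliskan et al. (2017), can be sketched roughly as below. This is a minimal illustration rather than the authors' released code: the function names, the aperture value alpha, and the use of a single uncentered correlation matrix over one bias word list are assumptions, and the paper's full pipeline (e.g., how word lists are combined) may differ.

    import numpy as np

    def negated_conceptor(bias_vectors, alpha=2.0):
        # Uncentered correlation matrix of the bias-word vectors (dim x dim).
        X = np.asarray(bias_vectors, dtype=float)   # shape: (n_words, dim)
        R = X.T @ X / X.shape[0]
        d = R.shape[0]
        # Conceptor C = R (R + alpha^-2 I)^-1; its negation I - C suppresses
        # the directions the bias words strongly occupy.
        C = R @ np.linalg.inv(R + alpha ** -2 * np.eye(d))
        return np.eye(d) - C

    def debias(embeddings, neg_C):
        # Post-process embeddings by mapping each vector through I - C.
        return np.asarray(embeddings, dtype=float) @ neg_C.T

    def weat_effect_size(X, Y, A, B):
        # WEAT effect size: Cohen's d of s(w, A, B) over target sets X vs. Y,
        # where s(w, A, B) = mean_a cos(w, a) - mean_b cos(w, b).
        def cos(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
        def s(w):
            return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])
        s_X = [s(x) for x in X]
        s_Y = [s(y) for y in Y]
        return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

In a WEAT evaluation of this kind, X and Y would be target word sets (e.g., two sets of names) and A and B attribute sets (e.g., pleasant vs. unpleasant words); the effect size would be computed on the embeddings before and after applying the negated conceptor.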
Anthology ID: W19-3806
Volume: Proceedings of the First Workshop on Gender Bias in Natural Language Processing
Month: August
Year: 2019
Address: Florence, Italy
Editors: Marta R. Costa-jussà, Christian Hardmeier, Will Radford, Kellie Webster
Venue: GeBNLP
Publisher: Association for Computational Linguistics
Pages: 40–48
URL: https://aclanthology.org/W19-3806
DOI: 10.18653/v1/W19-3806
Cite (ACL): Saket Karve, Lyle Ungar, and João Sedoc. 2019. Conceptor Debiasing of Word Representations Evaluated on WEAT. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 40–48, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Conceptor Debiasing of Word Representations Evaluated on WEAT (Karve et al., GeBNLP 2019)
PDF: https://preview.aclanthology.org/ingest-2024-clasp/W19-3806.pdf