Abstract
This paper investigates the presence of gender bias in pretrained Swedish embeddings. We focus on a scenario where names are matched with occupations, and we demonstrate how a number of standard pretrained embeddings handle this task. Our experiments show some significant differences between the pretrained embeddings, with word-based methods showing the most bias and contextualized language models showing the least. We also demonstrate that the previously proposed debiasing method does not affect the performance of the various embeddings in this scenario.
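To make the name-occupation matching scenario concrete, the following is a minimal sketch (not the authors' code) of how such a comparison could be run with a pretrained word-based embedding loaded via gensim. The model path and the name and occupation lists are illustrative placeholders, not the paper's actual test set.

import numpy as np
from gensim.models import KeyedVectors

# Assumed: a pretrained Swedish embedding in word2vec text format.
vectors = KeyedVectors.load_word2vec_format("swedish_vectors.txt", binary=False)

female_names = ["anna", "maria", "eva"]       # hypothetical examples
male_names = ["lars", "anders", "johan"]      # hypothetical examples
occupations = ["sjuksköterska", "ingenjör", "lärare", "snickare"]

def mean_similarity(names, occupation):
    """Average cosine similarity between an occupation and a set of names."""
    sims = [vectors.similarity(name, occupation)
            for name in names if name in vectors]
    return float(np.mean(sims)) if sims else float("nan")

# A positive score means the occupation vector lies closer to the female
# names on average; a negative score means it lies closer to the male names.
for occupation in occupations:
    if occupation not in vectors:
        continue
    bias = mean_similarity(female_names, occupation) - mean_similarity(male_names, occupation)
    print(f"{occupation}: {bias:+.3f}")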
- Anthology ID: W19-6104
- Volume: Proceedings of the 22nd Nordic Conference on Computational Linguistics
- Month: September–October
- Year: 2019
- Address: Turku, Finland
- Editors: Mareike Hartmann, Barbara Plank
- Venue: NoDaLiDa
- Publisher: Linköping University Electronic Press
- Pages: 35–43
- URL: https://aclanthology.org/W19-6104
- Cite (ACL): Magnus Sahlgren and Fredrik Olsson. 2019. Gender Bias in Pretrained Swedish Embeddings. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 35–43, Turku, Finland. Linköping University Electronic Press.
- Cite (Informal): Gender Bias in Pretrained Swedish Embeddings (Sahlgren & Olsson, NoDaLiDa 2019)
- PDF: https://preview.aclanthology.org/nschneid-patch-3/W19-6104.pdf