Abstract
Natural Language Processing (NLP), through its many applications, is considered one of the most valuable fields in interdisciplinary research, as well as in computer science. However, it is not without flaws, and one of the most common is bias. This paper examines the main linguistic challenges of Inuktitut, an Indigenous language of Canada, and focuses on identifying and mitigating gender bias. We explore the unique characteristics of this language to determine which techniques are suitable for identifying and mitigating implicit biases. We apply several methods to quantify the gender bias present in Inuktitut word embeddings; we then mitigate the bias and evaluate the performance of the debiased embeddings. Next, we explain how approaches for detecting and reducing bias in English embeddings can be transferred to Inuktitut embeddings by properly accounting for the language's particular characteristics, and we compare the effect of the debiasing techniques on Inuktitut and English. Finally, we highlight some future research directions that will help to further push the boundaries.
- Anthology ID:
- 2022.gebnlp-1.25
- Volume:
- Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
- Month:
- July
- Year:
- 2022
- Address:
- Seattle, Washington
- Editors:
- Christian Hardmeier, Christine Basta, Marta R. Costa-jussà, Gabriel Stanovsky, Hila Gonen
- Venue:
- GeBNLP
- SIG:
- Publisher:
- Association for Computational Linguistics
- Note:
- Pages:
- 244–254
- Language:
- URL:
- https://aclanthology.org/2022.gebnlp-1.25
- DOI:
- 10.18653/v1/2022.gebnlp-1.25
- Cite (ACL):
- Oussama Hansal, Ngoc Tan Le, and Fatiha Sadat. 2022. Indigenous Language Revitalization and the Dilemma of Gender Bias. In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 244–254, Seattle, Washington. Association for Computational Linguistics.
- Cite (Informal):
- Indigenous Language Revitalization and the Dilemma of Gender Bias (Hansal et al., GeBNLP 2022)
- PDF:
- https://preview.aclanthology.org/dois-2013-emnlp/2022.gebnlp-1.25.pdf