Abstract
Pre-trained word embeddings, such as GloVe, have shown undesirable gender, racial, and religious biases. To address this problem, we propose DD-GloVe, a train-time debiasing algorithm to learn word embeddings by leveraging dictionary definitions. We introduce dictionary-guided loss functions that encourage word embeddings to be similar to their relatively neutral dictionary definition representations. Existing debiasing algorithms typically need a pre-compiled list of seed words to represent the bias direction, along which biased information gets removed. Producing this list involves subjective decisions, and it might be difficult to obtain for some types of biases. We automate the process of finding seed words: our algorithm starts from a single pair of initial seed words and automatically finds more words whose definitions display similar attributes. We demonstrate the effectiveness of our approach with benchmark evaluations and empirical analyses. Our code is available at https://github.com/haozhe-an/DD-GloVe.
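The abstract describes two mechanisms: a dictionary-guided loss that keeps a word embedding close to a representation of its relatively neutral definition, and an automatic search for additional seed words starting from one initial pair. The sketch below illustrates both ideas under simple assumptions; the definition encoding (averaging token embeddings), the cosine-distance loss, and the candidate-ranking heuristic are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def definition_embedding(definition_tokens, embeddings):
    """Encode a dictionary definition as the average of its tokens'
    embeddings (an assumed, simple encoding for illustration)."""
    vecs = [embeddings[t] for t in definition_tokens if t in embeddings]
    return np.mean(vecs, axis=0)

def dictionary_guided_loss(word_vec, def_vec):
    """Penalize a word embedding for drifting away from its relatively
    neutral definition representation (cosine-distance form assumed)."""
    cos = np.dot(word_vec, def_vec) / (
        np.linalg.norm(word_vec) * np.linalg.norm(def_vec) + 1e-8)
    return 1.0 - cos

def rank_seed_candidates(candidates, seed_pair, definitions, embeddings):
    """Rank candidate words by how strongly their definition embeddings
    align with the direction spanned by the initial seed pair's definitions
    (a hypothetical heuristic standing in for the paper's procedure)."""
    bias_dir = (definition_embedding(definitions[seed_pair[0]], embeddings)
                - definition_embedding(definitions[seed_pair[1]], embeddings))
    bias_dir /= np.linalg.norm(bias_dir) + 1e-8
    scores = {w: abs(np.dot(definition_embedding(definitions[w], embeddings), bias_dir))
              for w in candidates}
    return sorted(scores, key=scores.get, reverse=True)
```

In the actual algorithm, such terms are incorporated into the GloVe training objective rather than applied post hoc; see the linked repository for the authors' implementation.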
- Anthology ID: 2022.findings-acl.90
- Volume: Findings of the Association for Computational Linguistics: ACL 2022
- Month: May
- Year: 2022
- Address: Dublin, Ireland
- Venue: Findings
- SIG:
- Publisher: Association for Computational Linguistics
- Note:
- Pages: 1139–1152
- Language:
- URL: https://aclanthology.org/2022.findings-acl.90
- DOI: 10.18653/v1/2022.findings-acl.90
- Cite (ACL): Haozhe An, Xiaojiang Liu, and Donald Zhang. 2022. Learning Bias-reduced Word Embeddings Using Dictionary Definitions. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1139–1152, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): Learning Bias-reduced Word Embeddings Using Dictionary Definitions (An et al., Findings 2022)
- PDF: https://preview.aclanthology.org/paclic-22-ingestion/2022.findings-acl.90.pdf
- Code: haozhe-an/dd-glove