Abstract
Generating machine translations via beam search seeks the most likely output under a model. However, beam search has been shown to amplify demographic biases exhibited by a model. We aim to address this, focusing on gender bias resulting from systematic errors in grammatical gender translation. Almost all prior work on this problem adjusts the training data or the model itself. By contrast, our approach changes only the inference procedure. We constrain beam search to improve gender diversity in n-best lists, and rerank n-best lists using gender features obtained from the source sentence. Combining these strongly improves WinoMT gender translation accuracy for three language pairs without additional bilingual data or retraining. We also demonstrate our approach’s utility for consistently gendering named entities, and its flexibility to handle new gendered language beyond the binary.
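The released implementation is linked under Code below; as a rough, hypothetical sketch of the reranking idea only (the toy gender lexicons, the `rerank` scoring heuristic, and the agreement weight are illustrative assumptions, not taken from the paper or from dcsaunders/nmt-gender-rerank), an n-best reranker driven by source-side gender features might look like this:

```python
# Hypothetical sketch: rerank an n-best list using gender cues from the source.
# Lexicons and scoring are toy assumptions, not the authors' released method.

from typing import List, Tuple

# Toy source-side (English) and target-side (German) gender cues.
SRC_GENDER_CUES = {"he": "m", "him": "m", "his": "m",
                   "she": "f", "her": "f", "hers": "f"}
TGT_GENDER_CUES = {"der": "m", "er": "m", "ihn": "m", "sein": "m",
                   "die": "f", "sie": "f", "ihr": "f", "ihre": "f"}


def source_genders(source: str) -> set:
    """Collect coarse gender features from the source sentence."""
    return {SRC_GENDER_CUES[t] for t in source.lower().split()
            if t in SRC_GENDER_CUES}


def gender_agreement(hypothesis: str, wanted: set) -> int:
    """Count target-side gender cues that (dis)agree with the source features."""
    score = 0
    for tok in hypothesis.lower().split():
        gender = TGT_GENDER_CUES.get(tok)
        if gender is not None:
            score += 1 if gender in wanted else -1
    return score


def rerank(source: str,
           nbest: List[Tuple[str, float]],
           weight: float = 1.0) -> List[Tuple[str, float]]:
    """Combine each hypothesis's model log-probability with a weighted
    gender-agreement bonus, then sort the n-best list by the new score."""
    wanted = source_genders(source)
    rescored = [(hyp, logprob + weight * gender_agreement(hyp, wanted))
                for hyp, logprob in nbest]
    return sorted(rescored, key=lambda x: x[1], reverse=True)


if __name__ == "__main__":
    src = "The doctor finished her shift."
    nbest = [("Der Arzt beendete seine Schicht.", -1.2),
             ("Die Ärztin beendete ihre Schicht.", -1.5)]
    for hyp, score in rerank(src, nbest):
        print(f"{score:+.2f}  {hyp}")
```

On this toy example the feminine translation is promoted above the model's original first choice once the source pronoun "her" is detected, which mirrors the intuition of reranking with source gender features; the actual feature extraction and scoring in the paper differ.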
- Anthology ID: 2022.findings-acl.301
- Volume: Findings of the Association for Computational Linguistics: ACL 2022
- Month: May
- Year: 2022
- Address: Dublin, Ireland
- Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 3814–3823
- URL: https://aclanthology.org/2022.findings-acl.301
- DOI: 10.18653/v1/2022.findings-acl.301
- Cite (ACL): Danielle Saunders, Rosie Sallis, and Bill Byrne. 2022. First the Worst: Finding Better Gender Translations During Beam Search. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3814–3823, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): First the Worst: Finding Better Gender Translations During Beam Search (Saunders et al., Findings 2022)
- PDF: https://preview.aclanthology.org/naacl-24-ws-corrections/2022.findings-acl.301.pdf
- Code: dcsaunders/nmt-gender-rerank