Toxicity in Multilingual Machine Translation at Scale
Marta Costa-jussà, Eric Smith, Christophe Ropers, Daniel Licht, Jean Maillard, Javier Ferrando, Carlos Escolano
Abstract
Machine Translation systems can produce different types of errors, some of which are characterized as critical or catastrophic due to the specific negative impact that they can have on users. In this paper we focus on one type of critical error: added toxicity. We evaluate and analyze added toxicity when translating a large evaluation dataset (HOLISTICBIAS, over 472k sentences, covering 13 demographic axes) from English into 164 languages. An automatic toxicity evaluation shows that added toxicity across languages varies from 0% to 5%. The output languages with the most added toxicity tend to be low-resource ones, and the demographic axes with the most added toxicity include sexual orientation, gender and sex, and ability. We also perform human evaluation on a subset of 8 translation directions, confirming the prevalence of true added toxicity. We use a measurement of the amount of source contribution to the translation, where a low source contribution implies hallucination, to interpret what causes toxicity. Making use of the input attributions allows us to explain toxicity, because the source contributions significantly correlate with toxicity for 84% of the languages studied. Given our findings, our recommendations to reduce added toxicity are to curate training data to avoid mistranslations, mitigate hallucination, and check unstable translations.
- Anthology ID:
- 2023.findings-emnlp.642
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2023
- Month:
- December
- Year:
- 2023
- Address:
- Singapore
- Editors:
- Houda Bouamor, Juan Pino, Kalika Bali
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 9570–9586
- URL:
- https://preview.aclanthology.org/ingest_wac_2008/2023.findings-emnlp.642/
- DOI:
- 10.18653/v1/2023.findings-emnlp.642
- Cite (ACL):
- Marta Costa-jussà, Eric Smith, Christophe Ropers, Daniel Licht, Jean Maillard, Javier Ferrando, and Carlos Escolano. 2023. Toxicity in Multilingual Machine Translation at Scale. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9570–9586, Singapore. Association for Computational Linguistics.
- Cite (Informal):
- Toxicity in Multilingual Machine Translation at Scale (Costa-jussà et al., Findings 2023)
- PDF:
- https://preview.aclanthology.org/ingest_wac_2008/2023.findings-emnlp.642.pdf
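The abstract reports that per-language source contributions significantly correlate with added toxicity. A minimal sketch of that kind of analysis is below; it is not the paper's code, and the scores and flags are made-up placeholders standing in for real attribution scores and toxicity-detector outputs. With a binary toxicity flag, a plain Pearson correlation is equivalent to the point-biserial correlation.

```python
# Toy illustration of correlating source-contribution scores with added
# toxicity. All values here are hypothetical; real inputs would come from an
# input-attribution method and an automatic toxicity detector.

def pearson(xs, ys):
    """Pearson correlation; with a binary y this is the point-biserial r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-sentence source-contribution scores (0..1) and binary
# "added toxicity" flags for one translation direction.
source_contribution = [0.9, 0.85, 0.8, 0.3, 0.75, 0.25, 0.7, 0.2]
added_toxicity      = [0,   0,    0,   1,   0,    1,    0,   1]

r = pearson(source_contribution, added_toxicity)
print(f"correlation: {r:.3f}")  # strongly negative: low contribution ~ toxic
```

A strongly negative correlation in this setup would match the abstract's interpretation that low source contribution (i.e., hallucination-like behavior) tends to accompany added toxicity.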