Investigating Multi-Pivot Ensembling with Massively Multilingual Machine Translation Models

Alireza Mohammadshahi, Jannis Vamvas, Rico Sennrich


Abstract
Massively multilingual machine translation models allow for the translation of a large number of languages with a single model, but have limited performance on low- and very-low-resource translation directions. Pivoting via high-resource languages remains a strong strategy for low-resource directions, and in this paper we revisit ways of pivoting through multiple languages. Previous work has used a simple averaging of probability distributions from multiple paths, but we find that this performs worse than using a single pivot, and exacerbates the hallucination problem because the same hallucinations can be probable across different paths. We also propose MaxEns, a novel combination strategy that biases the output towards the most confident predictions, hypothesising that confident predictions are less likely to be hallucinations. We evaluate different strategies on the FLORES benchmark for 20 low-resource language directions, demonstrating that MaxEns improves translation quality for low-resource languages while reducing hallucination in translations, compared to both direct translation and an averaging approach. On average, multi-pivot strategies still lag behind using English as a single pivot language, raising the question of how to identify the best pivoting strategy for a given translation direction.
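To make the contrast between the two combination strategies concrete, below is a minimal sketch of how next-token distributions from several pivot paths might be combined. The function name, the exact max-based rule, and the toy numbers are illustrative assumptions for exposition; the paper defines the precise MaxEns formulation.

```python
import numpy as np

def combine_pivot_distributions(distributions, strategy="maxens"):
    """Combine next-token probability distributions from several pivot paths.

    `distributions` is a list of 1-D arrays over a shared target vocabulary,
    one per pivot path, each summing to 1. Illustrative sketch only.
    """
    stacked = np.stack(distributions)  # shape: (num_paths, vocab_size)
    if strategy == "average":
        # Simple averaging across paths (the baseline the paper finds to
        # underperform a single pivot, since shared hallucinations stay probable).
        return stacked.mean(axis=0)
    if strategy == "maxens":
        # Assumed max-based rule: follow the path whose top prediction is most
        # confident at this step, biasing output towards confident predictions.
        most_confident_path = stacked.max(axis=1).argmax()
        return stacked[most_confident_path]
    raise ValueError(f"unknown strategy: {strategy}")

# Toy example: two pivot paths over a 4-token vocabulary.
p_via_english = np.array([0.70, 0.10, 0.10, 0.10])
p_via_french = np.array([0.25, 0.25, 0.25, 0.25])
print(combine_pivot_distributions([p_via_english, p_via_french], "average"))
print(combine_pivot_distributions([p_via_english, p_via_french], "maxens"))
```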
Anthology ID:
2024.insights-1.19
Volume:
Proceedings of the Fifth Workshop on Insights from Negative Results in NLP
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Shabnam Tafreshi, Arjun Akula, João Sedoc, Aleksandr Drozd, Anna Rogers, Anna Rumshisky
Venues:
insights | WS
Publisher:
Association for Computational Linguistics
Pages:
169–180
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/2024.insights-1.19/
DOI:
10.18653/v1/2024.insights-1.19
Cite (ACL):
Alireza Mohammadshahi, Jannis Vamvas, and Rico Sennrich. 2024. Investigating Multi-Pivot Ensembling with Massively Multilingual Machine Translation Models. In Proceedings of the Fifth Workshop on Insights from Negative Results in NLP, pages 169–180, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Investigating Multi-Pivot Ensembling with Massively Multilingual Machine Translation Models (Mohammadshahi et al., insights 2024)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/2024.insights-1.19.pdf