Strategies for Adapting Multilingual Pre-training for Domain-Specific Machine Translation

Neha Verma, Kenton Murray, Kevin Duh

Abstract
Pretrained multilingual sequence-to-sequence models have been successful in improving translation performance for mid- and lower-resourced languages. However, it is unclear whether these models help in the domain adaptation setting, and if so, how best to adapt them to both the domain and the translation language pair. We therefore propose two major fine-tuning strategies: our language-first approach first learns the translation language pair via general bitext and then the domain via in-domain bitext, while our domain-first approach first learns the domain via multilingual in-domain bitext and then the language pair via pair-specific in-domain bitext. We test our approaches on 3 domains at different levels of data availability and on 5 language pairs. We find that models initialized from mBART generally outperform those using a random Transformer initialization. This holds even for languages outside of mBART’s pretraining set, and can yield improvements of over +10 BLEU. Additionally, under our domain-first approach, fine-tuning across multilingual in-domain corpora can lead to stark improvements in domain adaptation without sourcing additional out-of-domain bitext. In higher domain-availability settings, our domain-first approach can be competitive with our language-first approach even when using over 50× less data.
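Both strategies amount to two sequential fine-tuning stages over different corpora, starting from an mBART checkpoint. The following is a minimal sketch of that recipe, assuming HuggingFace Transformers and the facebook/mbart-large-cc25 checkpoint; the toy corpora, hyperparameters, and output paths are hypothetical placeholders for illustration, not the paper's actual configuration.

```python
# Sketch of two-stage fine-tuning from an mBART initialization.
# Corpora, hyperparameters, and paths below are hypothetical placeholders.
from datasets import Dataset
from transformers import (
    MBartForConditionalGeneration,
    MBartTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

MODEL = "facebook/mbart-large-cc25"
tokenizer = MBartTokenizer.from_pretrained(MODEL, src_lang="en_XX", tgt_lang="de_DE")

def make_dataset(pairs):
    """Tokenize (source, target) sentence pairs into a seq2seq training set."""
    src, tgt = zip(*pairs)
    enc = tokenizer(list(src), text_target=list(tgt), truncation=True, padding=True)
    return Dataset.from_dict(dict(enc))

def fine_tune(model, dataset, output_dir):
    """One fine-tuning stage: continue training the model on one corpus."""
    args = Seq2SeqTrainingArguments(
        output_dir=output_dir,
        per_device_train_batch_size=2,  # hypothetical setting
        num_train_epochs=1,             # hypothetical setting
        report_to="none",
    )
    Seq2SeqTrainer(model=model, args=args, train_dataset=dataset).train()
    return model

# Toy stand-ins for the corpora (the paper uses real general and in-domain bitext).
general_bitext = make_dataset([("Hello.", "Hallo."), ("Thank you.", "Danke.")])
in_domain_bitext = make_dataset([("Take 5 mg daily.", "Täglich 5 mg einnehmen.")])

model = MBartForConditionalGeneration.from_pretrained(MODEL)

# Language-first: learn the language pair on general bitext, then the domain.
model = fine_tune(model, general_bitext, "out/stage1_language")
model = fine_tune(model, in_domain_bitext, "out/stage2_domain")

# Domain-first would reverse the roles: first fine-tune on in-domain bitext
# pooled across multiple language pairs, then on pair-specific in-domain bitext.
```

The domain-first variant uses the same two-stage loop; only the corpora change, with stage one pooling in-domain bitext across language pairs before narrowing to the pair of interest, as described in the abstract.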
Anthology ID:
2022.amta-research.3
Volume:
Proceedings of the 15th biennial conference of the Association for Machine Translation in the Americas (Volume 1: Research Track)
Month:
September
Year:
2022
Address:
Orlando, USA
Venue:
AMTA
Publisher:
Association for Machine Translation in the Americas
Pages:
31–44
URL:
https://aclanthology.org/2022.amta-research.3
Cite (ACL):
Neha Verma, Kenton Murray, and Kevin Duh. 2022. Strategies for Adapting Multilingual Pre-training for Domain-Specific Machine Translation. In Proceedings of the 15th biennial conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 31–44, Orlando, USA. Association for Machine Translation in the Americas.
Cite (Informal):
Strategies for Adapting Multilingual Pre-training for Domain-Specific Machine Translation (Verma et al., AMTA 2022)
PDF:
https://aclanthology.org/2022.amta-research.3.pdf