Boosting Unsupervised Machine Translation with Pseudo-Parallel Data

Ivana Kvapilíková, Ondřej Bojar


Abstract
Even with the latest developments in deep learning and large-scale language modeling, the task of machine translation (MT) of low-resource languages remains a challenge. Neural MT systems can be trained in an unsupervised way without any translation resources but the quality lags behind, especially in truly low-resource conditions. We propose a training strategy that relies on pseudo-parallel sentence pairs mined from monolingual corpora in addition to synthetic sentence pairs back-translated from monolingual corpora. We experiment with different training schedules and reach an improvement of up to 14.5 BLEU points (English to Ukrainian) over a baseline trained on back-translated data only.
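The abstract refers to mining pseudo-parallel sentence pairs from monolingual corpora. A common way to do this is nearest-neighbor search over cross-lingual sentence embeddings; the sketch below illustrates the idea with plain cosine similarity and a score threshold. The embeddings, the `mine_pseudo_parallel` helper, and the threshold value are illustrative assumptions, not the paper's actual mining procedure, which may use a different scoring scheme.

```python
import numpy as np

def mine_pseudo_parallel(src_emb, tgt_emb, threshold=0.8):
    """Pair each source sentence with its nearest target sentence
    by cosine similarity; keep pairs scoring above the threshold.
    (Illustrative sketch, not the paper's exact mining method.)"""
    # Normalize rows so dot products equal cosine similarities.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                      # (n_src, n_tgt) similarity matrix
    best = sim.argmax(axis=1)              # nearest target for each source
    return [(i, int(j)) for i, j in enumerate(best) if sim[i, j] >= threshold]

# Toy 2-D "embeddings": source sentences 0 and 1 align with targets 1 and 0.
src = np.array([[1.0, 0.1], [0.1, 1.0]])
tgt = np.array([[0.1, 1.0], [1.0, 0.2]])
print(mine_pseudo_parallel(src, tgt))  # → [(0, 1), (1, 0)]
```

The mined index pairs would then be treated as (noisy) training sentence pairs alongside the back-translated synthetic data.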
Anthology ID:
2023.mtsummit-research.12
Volume:
Proceedings of Machine Translation Summit XIX, Vol. 1: Research Track
Month:
September
Year:
2023
Address:
Macau SAR, China
Editors:
Masao Utiyama, Rui Wang
Venue:
MTSummit
Publisher:
Asia-Pacific Association for Machine Translation
Pages:
135–147
URL:
https://aclanthology.org/2023.mtsummit-research.12
Cite (ACL):
Ivana Kvapilíková and Ondřej Bojar. 2023. Boosting Unsupervised Machine Translation with Pseudo-Parallel Data. In Proceedings of Machine Translation Summit XIX, Vol. 1: Research Track, pages 135–147, Macau SAR, China. Asia-Pacific Association for Machine Translation.
Cite (Informal):
Boosting Unsupervised Machine Translation with Pseudo-Parallel Data (Kvapilíková & Bojar, MTSummit 2023)
PDF:
https://preview.aclanthology.org/naacl24-info/2023.mtsummit-research.12.pdf