2022
Teaching Unseen Low-resource Languages to Large Translation Models
Maali Tars | Taido Purason | Andre Tättar
Proceedings of the Seventh Conference on Machine Translation (WMT)
In recent years, research on large multilingual pre-trained neural machine translation models has grown, and it is common for these models to be publicly available for use and fine-tuning. Low-resource languages benefit from such pre-trained models through knowledge transfer from high- and medium-resource languages. The recently released M2M-100 model is our starting point for cross-lingual transfer learning to Finno-Ugric languages such as Livonian. We participate in the WMT22 General Machine Translation task, focusing on the English-Livonian language pair. By leveraging data from other Finno-Ugric languages, we achieve high scores for both English-Livonian translation directions. Overall, instead of training a model from scratch, we fine-tune a publicly available pre-trained model, using transfer learning and back-translation as the main methods. This in turn reduces the cost and duration of training high-quality multilingual neural machine translation models.
Multilingual Neural Machine Translation With the Right Amount of Sharing
Taido Purason | Andre Tättar
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
Large multilingual Transformer-based machine translation models have played a pivotal role in making translation systems available for hundreds of languages with good zero-shot translation performance. One such example is the universal model with a shared encoder-decoder architecture. Jointly trained language-specific encoder-decoder systems have also been proposed for multilingual neural machine translation (NMT). This work investigates various knowledge-sharing approaches on the encoder side while keeping the decoder language- or language-group-specific. We propose a novel approach that combines universal, language-group-specific, and language-specific modules to address the shortcomings of both universal models and models with language-specific encoders and decoders. Experiments on a multilingual dataset set up to model real-world scenarios, including zero-shot and low-resource translation, show that our proposed models achieve higher translation quality than purely universal and purely language-specific approaches.
MTee: Open Machine Translation Platform for Estonian Government
Toms Bergmanis | Marcis Pinnis | Roberts Rozis | Jānis Šlapiņš | Valters Šics | Berta Bernāne | Guntars Pužulis | Endijs Titomers | Andre Tättar | Taido Purason | Hele-Andra Kuulmets | Agnes Luhtaru | Liisa Rätsep | Maali Tars | Annika Laumets-Tättar | Mark Fishel
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
We present the MTee project, a research initiative funded via an Estonian public procurement to develop machine translation technology that is open-source and free of charge. The MTee project delivered an open-source platform serving state-of-the-art machine translation systems that support four domains and six language pairs, translating from Estonian into English, German, and Russian and vice versa. The platform also features grammatical error correction and speech translation for Estonian, and allows for formatted document translation and automatic domain detection. The software, data, and training workflows for the machine translation engines are all made publicly available for further use and research.