2020
Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages
Alina Karakanta | Atul Kr. Ojha | Chao-Hong Liu | Jade Abbott | John Ortega | Jonathan Washington | Nathaniel Oco | Surafel Melaku Lakew | Tommi A Pirinen | Valentin Malykh | Varvara Logacheva | Xiaobing Zhao

2019
Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages
Alina Karakanta | Atul Kr. Ojha | Chao-Hong Liu | Jonathan Washington | Nathaniel Oco | Surafel Melaku Lakew | Valentin Malykh | Xiaobing Zhao

Controlling the Output Length of Neural Machine Translation
Surafel Melaku Lakew | Mattia Di Gangi | Marcello Federico
Proceedings of the 16th International Conference on Spoken Language Translation
The recent advances introduced by neural machine translation (NMT) are rapidly expanding the application fields of machine translation, as well as reshaping the quality level to be targeted. In particular, if translations have to fit some given layout, quality should be measured not only in terms of adequacy and fluency, but also length. Exemplary cases are the translation of document files, subtitles, and scripts for dubbing, where the output length should ideally be as close as possible to the length of the input text. This paper addresses for the first time, to the best of our knowledge, the problem of controlling the output length in NMT. We investigate two methods for biasing the output length with a transformer architecture: i) conditioning the output on a given target-source length-ratio class and ii) enriching the transformer positional embedding with length information. Our experiments show that both methods can induce the network to generate shorter translations, as well as to acquire interpretable linguistic skills.
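Method (i) lends itself to a very small data-preparation sketch: at training time each source sentence is prefixed with a token encoding the target/source length-ratio class, and at inference the desired class token is supplied. The token names and the thresholds below are illustrative assumptions, not values taken from the paper.

```python
# Sketch of length-ratio conditioning: prefix each source sentence with a
# class token derived from the target/source length ratio. Token names and
# the 0.95/1.05 thresholds are illustrative assumptions, not the paper's values.

def length_ratio_token(src_tokens, tgt_tokens, short_thr=0.95, long_thr=1.05):
    """Return a class token for the target/source length ratio."""
    ratio = len(tgt_tokens) / max(len(src_tokens), 1)
    if ratio < short_thr:
        return "<short>"
    if ratio > long_thr:
        return "<long>"
    return "<normal>"

def tag_training_pair(src_line, tgt_line):
    """Prepend the length-ratio class token to the source side."""
    src_tokens, tgt_tokens = src_line.split(), tgt_line.split()
    token = length_ratio_token(src_tokens, tgt_tokens)
    return f"{token} {src_line}", tgt_line

# At inference time the user forces the desired behaviour, e.g. prepend
# "<normal>" to obtain a translation roughly as long as the input.
if __name__ == "__main__":
    src = "the cat sat on the mat"
    tgt = "il gatto era seduto sul tappeto"
    print(tag_training_pair(src, tgt))
```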
2018
Neural Machine Translation into Language Varieties
Surafel Melaku Lakew | Aliia Erofeeva | Marcello Federico
Proceedings of the Third Conference on Machine Translation: Research Papers
Both research and commercial machine translation have so far neglected the importance of properly handling the spelling, lexical, and grammatical divergences occurring among language varieties. Notable cases are standard national varieties such as Brazilian and European Portuguese, and Canadian and European French, which popular online machine translation services do not keep distinct. We show that an evident side effect of modeling such varieties as unique classes is the generation of inconsistent translations. In this work, we investigate the problem of training neural machine translation from English to specific pairs of language varieties, assuming both labeled and unlabeled parallel texts, and low-resource conditions. We report experiments from English to two pairs of dialects, European-Brazilian Portuguese and European-Canadian French, and two pairs of standardized varieties, Croatian-Serbian and Indonesian-Malay. We show significant BLEU score improvements over baseline systems when translation into similar languages is learned as a multilingual task with shared representations.
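The multilingual setup described here is commonly realized by prepending a target-variety token to each source sentence, in the style of target-language tags for multilingual NMT. The sketch below illustrates that data preparation under this assumption; the tag format and the variety codes are illustrative, not the paper's exact labels.

```python
# Sketch of the target-token approach to multilingual NMT applied to
# language varieties: each English source sentence is prefixed with the
# code of the desired output variety, so a single model with shared
# parameters serves both varieties. Tag format and variety codes
# (e.g. "pt-BR", "pt-PT") are illustrative assumptions.

from typing import Iterable, Iterator, Tuple

def tag_corpus(pairs: Iterable[Tuple[str, str]], variety: str) -> Iterator[Tuple[str, str]]:
    """Prepend a target-variety token to every source sentence."""
    for src, tgt in pairs:
        yield f"<2{variety}> {src}", tgt

# Training data for the two Portuguese varieties is concatenated into one
# multilingual corpus; at inference the tag selects the output variety.
brazilian = [("the bus is late", "o ônibus está atrasado")]
european = [("the bus is late", "o autocarro está atrasado")]

multilingual_corpus = list(tag_corpus(brazilian, "pt-BR")) + \
                      list(tag_corpus(european, "pt-PT"))

for src, tgt in multilingual_corpus:
    print(src, "->", tgt)
```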
A Comparison of Transformer and Recurrent Neural Networks on Multilingual Neural Machine Translation
Surafel Melaku Lakew | Mauro Cettolo | Marcello Federico
Proceedings of the 27th International Conference on Computational Linguistics
Recently, neural machine translation (NMT) has been extended to multilinguality, that is, to handle more than one translation direction with a single system. Multilingual NMT showed competitive performance against pure bilingual systems. Notably, in low-resource settings, it proved to work effectively and efficiently, thanks to a shared representation space that is forced across languages and induces a sort of transfer learning. Furthermore, multilingual NMT enables so-called zero-shot inference across language pairs never seen at training time. Despite the increasing interest in this framework, an in-depth analysis of what a multilingual NMT model is capable of and what it is not is still missing. Motivated by this, our work (i) provides a quantitative and comparative analysis of the translations produced by bilingual, multilingual and zero-shot systems; (ii) investigates the translation quality of two of the currently dominant neural architectures in MT, the Recurrent and the Transformer architectures; and (iii) quantitatively explores how the closeness between languages influences the zero-shot translation. Our analysis leverages multiple professional post-edits of automatic translations by several different systems and focuses both on standard automatic metrics (BLEU and TER) and on widely used error categories: lexical, morphological, and word-order errors.
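For readers who want to reproduce this kind of automatic scoring, the sketch below computes the two metrics named in the abstract, BLEU and TER, with the sacrebleu library. sacrebleu is only one standard implementation of these metrics, assumed here for illustration and not necessarily the tooling used in the paper; the sentences are placeholders.

```python
# Sketch of scoring system outputs against post-edited references with
# BLEU and TER via sacrebleu (an assumed, standard metric implementation).

from sacrebleu.metrics import BLEU, TER

# Toy outputs of three systems and one shared post-edited reference.
systems = {
    "bilingual":    ["the cat sits on the mat"],
    "multilingual": ["the cat is sitting on the mat"],
    "zero-shot":    ["cat sitting mat the on"],
}
references = [["the cat is sitting on the mat"]]

bleu, ter = BLEU(), TER()
for name, hypotheses in systems.items():
    b = bleu.corpus_score(hypotheses, references)
    t = ter.corpus_score(hypotheses, references)
    print(f"{name:12s} BLEU={b.score:5.1f} TER={t.score:5.1f}")
```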