Wei-Rui Chen
Also published as: Wei-rui Chen
2023
Improving Neural Machine Translation of Indigenous Languages with Multilingual Transfer Learning
Wei-rui Chen | Muhammad Abdul-mageed
Proceedings of the Sixth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2023)
Machine translation (MT) involving Indigenous languages, including endangered ones, is challenging primarily due to the lack of sufficient parallel data. We describe an approach exploiting bilingual and multilingual pretrained MT models in a transfer learning setting to translate from Spanish into ten South American Indigenous languages. Our models set new SOTA on five out of the ten language pairs we consider, even doubling performance on one of these five pairs. Unlike previous SOTA systems, which perform data augmentation to enlarge the training sets, we retain the low-resource setting to test the effectiveness of our models under such a constraint. Despite the scarcity of linguistic information available about the Indigenous languages, we offer a number of quantitative and qualitative analyses (e.g., of morphology, tokenization, and orthography) to contextualize our results.
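A minimal sketch of the transfer-learning recipe the abstract describes: a pretrained MT parent checkpoint is fine-tuned on a small Spanish-to-Indigenous parallel corpus. The checkpoint (Helsinki-NLP/opus-mt-es-en), the TSV file name and its src/tgt columns, and the hyperparameters below are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: fine-tune a pretrained MT parent on a small Spanish->Indigenous
# parallel corpus (transfer learning in a low-resource setting).
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments,
                          DataCollatorForSeq2Seq)
from datasets import load_dataset

parent_ckpt = "Helsinki-NLP/opus-mt-es-en"   # assumed bilingual parent model
tokenizer = AutoTokenizer.from_pretrained(parent_ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(parent_ckpt)

# Hypothetical tab-separated file with "src" (Spanish) and "tgt" (Indigenous) columns.
raw = load_dataset("csv", data_files={"train": "es_ind_train.tsv"}, delimiter="\t")

def preprocess(batch):
    # Tokenize source and target; short sequences keep the low-resource runs cheap.
    return tokenizer(batch["src"], text_target=batch["tgt"],
                     truncation=True, max_length=128)

train_set = raw["train"].map(preprocess, batched=True,
                             remove_columns=["src", "tgt"])

args = Seq2SeqTrainingArguments(
    output_dir="mt-es-indigenous",
    per_device_train_batch_size=16,
    learning_rate=3e-5,
    num_train_epochs=10,
    save_strategy="epoch",
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_set,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```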
2021
Machine Translation of Low-Resource Indo-European Languages
Wei-Rui Chen | Muhammad Abdul-Mageed
Proceedings of the Sixth Conference on Machine Translation
In this work, we investigate methods for the challenging task of translating between low-resource language pairs that exhibit some degree of similarity. In particular, we consider the utility of transfer learning for translating between several Indo-European low-resource languages from the Germanic and Romance language families. We build two main classes of transfer-based systems to study how relatedness can benefit translation performance: the primary system fine-tunes a model pre-trained on a related language pair, while the contrastive system fine-tunes one pre-trained on an unrelated language pair. Our experiments show that although relatedness is not necessary for transfer learning to work, it does benefit model performance.
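A small sketch of how the primary (related-parent) and contrastive (unrelated-parent) systems described above could be compared once each has been fine-tuned on the same child language pair. The file names are hypothetical; each file is assumed to hold one detokenized hypothesis or reference per line.

```python
# Hedged sketch: score two fine-tuned transfer systems on the same test set.
import sacrebleu

def read_lines(path):
    # One sentence per line, already detokenized.
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f]

refs = [read_lines("test.ref")]                  # single reference stream
for system in ("primary_related", "contrastive_unrelated"):
    hyps = read_lines(f"{system}.hyp")           # hypothetical system outputs
    bleu = sacrebleu.corpus_bleu(hyps, refs)
    chrf = sacrebleu.corpus_chrf(hyps, refs)
    print(f"{system}: BLEU={bleu.score:.2f} chrF={chrf.score:.2f}")
```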
IndT5: A Text-to-Text Transformer for 10 Indigenous Languages
El Moatez Billah Nagoudi | Wei-Rui Chen | Muhammad Abdul-Mageed | Hasan Cavusoglu
Proceedings of the First Workshop on Natural Language Processing for Indigenous Languages of the Americas
Transformer language models have become fundamental components of NLP-based pipelines. Although several Transformer models have been introduced to serve many languages, there is a shortage of models pre-trained for low-resource languages, and for Indigenous languages in particular. In this work, we introduce IndT5, the first Transformer language model for Indigenous languages. To train IndT5, we build IndCorpus, a new corpus covering 10 Indigenous languages and Spanish. We also present the application of IndT5 to machine translation by investigating different approaches to translating between Spanish and the Indigenous languages as part of our contribution to the AmericasNLP 2021 Shared Task on Open Machine Translation. IndT5 and IndCorpus are publicly available for research.
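An illustrative sketch of the text-to-text formulation a T5-style model such as IndT5 uses for MT: translation is cast as conditional generation over a prefixed input. Here google/mt5-small merely stands in for IndT5, and the task prefix is an assumption; without fine-tuning on Spanish-Indigenous parallel data, the output is not a real translation, so the snippet only shows the interface.

```python
# Hedged sketch: text-to-text translation interface with a T5-style checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ckpt = "google/mt5-small"                     # stand-in for IndT5 (assumption)
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

# Translation framed as prefixed text generation (prefix is illustrative).
src = "translate Spanish to Quechua: Hola, ¿cómo estás?"
inputs = tok(src, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tok.batch_decode(out, skip_special_tokens=True)[0])
# A checkpoint fine-tuned on Spanish-Indigenous parallel data would be needed
# for the output to be a meaningful translation.
```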