Setting up the Data Printer with Improved English to Ukrainian Machine Translation

Yurii Paniv, Dmytro Chaplynskyi, Nikita Trynus, Volodymyr Kyrylov


Abstract
To build large language models for Ukrainian, we need to expand our corpora with large amounts of new algorithmic tasks expressed in natural language. Examples of task performance expressed in English are abundant, so with a high-quality translation system our community would be able to curate datasets faster. To aid this goal, we introduce a recipe for building a translation system using supervised finetuning of a large pretrained language model on a noisy parallel dataset of 3M pairs of Ukrainian and English sentences, followed by a second phase of training on 17K examples selected by k-fold perplexity filtering from another dataset of higher quality. Our decoder-only model, named Dragoman, outperforms previous state-of-the-art encoder-decoder models on the FLORES devtest set.
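The k-fold perplexity filtering mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `score_fn` is a hypothetical stand-in for training a model on the out-of-fold data and returning an example's perplexity under it, so each example is scored by a model that never saw its own fold.

```python
def kfold_perplexity_filter(examples, score_fn, k=5, keep_ratio=0.5):
    """Select low-perplexity examples via out-of-fold scoring.

    `score_fn(train, ex)` is a placeholder: in the real recipe it would
    train (or finetune) a model on `train` and return the perplexity
    of `ex` under that model. Lower scores are kept.
    """
    # Split the dataset into k interleaved folds.
    folds = [examples[i::k] for i in range(k)]
    scored = []
    for i, fold in enumerate(folds):
        # Score each held-out example with a model fit on the other folds.
        train = [ex for j, f in enumerate(folds) if j != i for ex in f]
        for ex in fold:
            scored.append((score_fn(train, ex), ex))
    # Keep the lowest-perplexity fraction of the data.
    scored.sort(key=lambda pair: pair[0])
    n_keep = int(len(scored) * keep_ratio)
    return [ex for _, ex in scored[:n_keep]]
```

For example, with a toy scoring function such as `lambda train, ex: len(ex)`, the filter keeps the shortest half of the examples; a real scoring function would rank sentence pairs by how surprising they are to a translation model, discarding noisy pairs.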
Anthology ID:
2024.unlp-1.6
Volume:
Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Mariana Romanyshyn, Nataliia Romanyshyn, Andrii Hlybovets, Oleksii Ignatenko
Venue:
UNLP
Publisher:
ELRA and ICCL
Pages:
41–50
URL:
https://aclanthology.org/2024.unlp-1.6
Cite (ACL):
Yurii Paniv, Dmytro Chaplynskyi, Nikita Trynus, and Volodymyr Kyrylov. 2024. Setting up the Data Printer with Improved English to Ukrainian Machine Translation. In Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024, pages 41–50, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Setting up the Data Printer with Improved English to Ukrainian Machine Translation (Paniv et al., UNLP 2024)
PDF:
https://preview.aclanthology.org/landing_page/2024.unlp-1.6.pdf