Jonathan Eng


2025

SMOL: Professionally Translated Parallel Data for 115 Under-represented Languages
Isaac Caswell | Elizabeth Nielsen | Jiaming Luo | Colin Cherry | Geza Kovacs | Hadar Shemtov | Partha Talukdar | Dinesh Tewari | Moussa Doumbouya | Djibrila Diane | Baba Mamadi Diane | Solo Farabado | Edoardo Ferrante | Alessandro Guasoni | Mamadou Keita | Sudhamoy Debbarma | Ali Kuzhuget | David Anugraha | Muhammad Ravi Shulthan Habibi | Sina Ahmadi | Mingfei Liu | Jonathan Eng
Proceedings of the Tenth Conference on Machine Translation

We open-source SMOL (Set of Maximal Overall Leverage), a suite of training data to unlock machine translation for low-resource languages (LRLs). SMOL has been translated into 123 under-resourced languages (125 language pairs), including many for which there exist no previous public resources, for a total of 6.1M translated tokens. SMOL comprises two sub-datasets, each carefully chosen for maximum impact given its size: SMOLSENT, a set of sentences chosen for broad unique token coverage, and SMOLDOC, a document-level resource focused on broad topic coverage. They join the already released GATITOS for a trifecta of paragraph-, sentence-, and token-level content. We demonstrate that using SMOL to prompt or fine-tune Large Language Models yields robust chrF improvements. In addition to translation, we provide factuality ratings and rationales for all documents in SMOLDOC, yielding the first factuality datasets for most of these languages.
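As a rough illustration of the workflow the abstract describes (few-shot prompting an LLM with parallel data, then scoring with chrF), here is a minimal sketch. The exemplar sentence pairs, language pair, and the model output are hypothetical placeholders rather than actual SMOL content; only the sacrebleu chrF API is a real library call.

```python
# Sketch: build a few-shot translation prompt from parallel sentence pairs
# (standing in for SMOLSENT-style data) and score output with chrF.
from sacrebleu.metrics import CHRF

def build_few_shot_prompt(pairs, src_lang, tgt_lang, new_source):
    """Format (source, target) exemplars into a translation prompt."""
    lines = [f"Translate from {src_lang} to {tgt_lang}."]
    for src, tgt in pairs:
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {tgt}")
    lines.append(f"{src_lang}: {new_source}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)

# Hypothetical exemplars; in practice these would be SMOL sentence pairs.
exemplars = [
    ("Good morning.", "Bonjour."),
    ("Thank you very much.", "Merci beaucoup."),
]
prompt = build_few_shot_prompt(exemplars, "English", "French", "See you tomorrow.")

# Placeholder model output; in practice this would come from an LLM call.
hypotheses = ["A demain."]
references = ["À demain."]

chrf = CHRF()  # character n-gram F-score, the metric cited in the abstract
print(chrf.corpus_score(hypotheses, [references]))
```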