Efficient Architectures For Low-Resource Machine Translation

Edoardo Signoroni, Pavel Rychly, Ruggero Signoroni
Abstract
Low-resource Neural Machine Translation is highly sensitive to hyperparameters and needs careful tuning to achieve the best results with small amounts of training data. We explore the impact of changes to the Transformer architecture on downstream translation quality and propose a metric to score the computational efficiency of such changes. Experimenting on English-Akkadian, German-Lower Sorbian, English-Italian, and English-Manipuri, we confirm previous findings in low-resource machine translation optimization and show that smaller, more parameter-efficient models can achieve the same translation quality as larger, unwieldier ones at a fraction of the computational cost. Optimized models have around 95% fewer parameters while losing at most 14.8% ChrF. We compile a list of optimal ranges for each hyperparameter.
Anthology ID:
2025.lowresnlp-1.6
Volume:
Proceedings of the First Workshop on Advancing NLP for Low-Resource Languages
Month:
September
Year:
2025
Address:
Varna, Bulgaria
Editors:
Ernesto Luis Estevanell-Valladares, Alicia Picazo-Izquierdo, Tharindu Ranasinghe, Besik Mikaberidze, Simon Ostermann, Daniil Gurgurov, Philipp Mueller, Claudia Borg, Marián Šimko
Venues:
LowResNLP | WS
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
39–64
URL:
https://preview.aclanthology.org/corrections-2026-01/2025.lowresnlp-1.6/
Cite (ACL):
Edoardo Signoroni, Pavel Rychly, and Ruggero Signoroni. 2025. Efficient Architectures For Low-Resource Machine Translation. In Proceedings of the First Workshop on Advancing NLP for Low-Resource Languages, pages 39–64, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Efficient Architectures For Low-Resource Machine Translation (Signoroni et al., LowResNLP 2025)
PDF:
https://preview.aclanthology.org/corrections-2026-01/2025.lowresnlp-1.6.pdf