Efficiently Reusing Old Models Across Languages via Transfer Learning

Tom Kocmi, Ondřej Bojar


Abstract
Recent progress in neural machine translation (NMT) is directed towards larger neural networks trained on an increasing amount of hardware resources. As a result, NMT models are costly to train, both financially, due to the electricity and hardware cost, and environmentally, due to the carbon footprint. This is especially true of transfer learning, which incurs the additional cost of training the “parent” model before transferring knowledge and training the desired “child” model. In this paper, we propose a simple method of reusing an already trained model for different language pairs that requires no modifications to the model architecture. Our approach does not need a separate parent model for each investigated language pair, as is typical in NMT transfer learning. To show the applicability of our method, we recycle a Transformer model trained by different researchers and use it to seed models for different language pairs. We achieve better translation quality and shorter convergence times than when training from random initialization.
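The core idea of the abstract, seeding a child model from an existing parent checkpoint instead of random weights, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names (`transfer_init`, `random_init`, `encoder.w`) are invented for the example, and the paper's actual models are Transformers.

```python
# Sketch: warm-starting a "child" NMT model from an already trained
# "parent" checkpoint, versus the random-initialization baseline.
import copy
import random


def random_init(rows, cols, seed=0):
    """Baseline: a randomly initialized parameter matrix."""
    rng = random.Random(seed)
    return [[rng.uniform(-0.1, 0.1) for _ in range(cols)]
            for _ in range(rows)]


def transfer_init(parent_params):
    """Transfer learning: copy the parent's trained weights.

    No architecture change is needed -- the child simply continues
    training from these values on the new language pair.
    """
    return copy.deepcopy(parent_params)


# Pretend parent checkpoint (e.g. one encoder weight matrix).
parent = {"encoder.w": [[0.5, -0.2], [0.1, 0.3]]}

child_random = {"encoder.w": random_init(2, 2)}
child_seeded = {"encoder.w": transfer_init(parent["encoder.w"])}

# The seeded child starts exactly where the parent left off,
# but owns its own copy of the weights.
assert child_seeded["encoder.w"] == parent["encoder.w"]
assert child_seeded["encoder.w"] is not parent["encoder.w"]
```

In practice the same effect is achieved by loading the parent checkpoint into the child's training run before the first update; the abstract's claim is that this single shared parent can seed children for multiple language pairs.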
Anthology ID:
2020.eamt-1.3
Volume:
Proceedings of the 22nd Annual Conference of the European Association for Machine Translation
Month:
November
Year:
2020
Address:
Lisboa, Portugal
Venue:
EAMT
Publisher:
European Association for Machine Translation
Note:
Pages:
19–28
URL:
https://aclanthology.org/2020.eamt-1.3
Cite (ACL):
Tom Kocmi and Ondřej Bojar. 2020. Efficiently Reusing Old Models Across Languages via Transfer Learning. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 19–28, Lisboa, Portugal. European Association for Machine Translation.
Cite (Informal):
Efficiently Reusing Old Models Across Languages via Transfer Learning (Kocmi & Bojar, EAMT 2020)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2020.eamt-1.3.pdf
Data
WMT 2018