Monolingual Adapters for Zero-Shot Neural Machine Translation

Jerin Philip, Alexandre Berard, Matthias Gallé, Laurent Besacier

Abstract
We propose a novel adapter layer formalism for adapting multilingual models. Our adapters are more parameter-efficient than existing adapter layers while achieving equal or better performance. Each adapter is specific to a single language (as opposed to bilingual adapters), which allows composing them and generalizing to unseen language pairs. In this zero-shot setting, they obtain a median improvement of +2.77 BLEU points over a strong 20-language multilingual Transformer baseline trained on TED talks.
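The abstract describes per-language adapter layers that are composed at translation time to cover unseen language pairs. As a minimal illustrative sketch of that idea, and not the paper's exact formalism, the following assumes a conventional residual bottleneck adapter design and hypothetical class names and dimensions:

import torch
import torch.nn as nn

class Adapter(nn.Module):
    # Residual bottleneck adapter: normalize, project down, apply a
    # nonlinearity, project back up, add the input (illustrative only;
    # the paper's formalism may differ).
    def __init__(self, d_model: int, d_bottleneck: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(self.norm(x))))

class MonolingualAdapters(nn.Module):
    # One adapter per language rather than per language pair: the
    # source-language adapter is applied on the encoder side and the
    # target-language adapter on the decoder side, so an unseen pair
    # can be served by composing two independently trained adapters.
    def __init__(self, languages, d_model: int = 512, d_bottleneck: int = 64):
        super().__init__()
        self.adapters = nn.ModuleDict(
            {lang: Adapter(d_model, d_bottleneck) for lang in languages}
        )

    def forward(self, x: torch.Tensor, lang: str) -> torch.Tensor:
        return self.adapters[lang](x)

# Zero-shot composition for a hypothetical de->fr direction, even if the
# adapters were only ever trained on pairs involving English:
adapters = MonolingualAdapters(["en", "de", "fr"])
h = torch.randn(2, 10, 512)    # (batch, sequence length, d_model)
enc_out = adapters(h, "de")    # source-side adapter in the encoder
dec_out = adapters(h, "fr")    # target-side adapter in the decoder

Because each adapter conditions on only one language, the number of adapter modules grows linearly with the number of languages rather than quadratically with the number of language pairs, which is the source of the parameter efficiency claimed in the abstract.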
Anthology ID:
2020.emnlp-main.361
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4465–4470
URL:
https://aclanthology.org/2020.emnlp-main.361
DOI:
10.18653/v1/2020.emnlp-main.361
Cite (ACL):
Jerin Philip, Alexandre Berard, Matthias Gallé, and Laurent Besacier. 2020. Monolingual Adapters for Zero-Shot Neural Machine Translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4465–4470, Online. Association for Computational Linguistics.
Cite (Informal):
Monolingual Adapters for Zero-Shot Neural Machine Translation (Philip et al., EMNLP 2020)
PDF:
https://preview.aclanthology.org/add_acl24_videos/2020.emnlp-main.361.pdf
Video:
https://slideslive.com/38939110