TUDa at WMT21: Sentence-Level Direct Assessment with Adapters

Gregor Geigle, Jonas Stadtmüller, Wei Zhao, Jonas Pfeiffer, Steffen Eger


Abstract
This paper presents our submissions to the WMT2021 Shared Task on Quality Estimation, Task 1 Sentence-Level Direct Assessment. While top-performing approaches utilize massively multilingual Transformer-based language models which have been pre-trained on all target languages of the task, the resulting insights are limited, as it is unclear how well the approach performs on languages unseen during pre-training; more problematically, these approaches do not provide any solutions for extending the model to new languages or unseen scripts, arguably one of the objectives of this shared task. In this work, we thus focus on utilizing massively multilingual language models which only partly cover the target languages during their pre-training phase. We extend the model to new languages and unseen scripts using recent adapter-based methods and achieve performance on par with, or even surpassing, models pre-trained on the respective languages.
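The adapter-based extension described in the abstract can be illustrated with a short sketch. The following is a hypothetical, minimal example (not the authors' released code) of the MAD-X-style recipe the abstract alludes to, written against the AdapterHub `adapters` library: a language adapter for the new language is stacked under a task adapter for quality estimation, and only the task adapter and regression head are updated during QE fine-tuning. The model name, adapter names, and config strings ("seq_bn", "seq_bn_inv") are illustrative assumptions.

import adapters
from adapters.composition import Stack
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=1  # single regression output for DA scores
)
adapters.init(model)  # enable adapter support on the plain HF model

# Language adapter for the unseen target language; in practice this would
# first be trained with masked language modeling on monolingual data
# (invertible adapters, as in "seq_bn_inv", also adapt the embeddings,
# which helps with unseen scripts).
model.add_adapter("new_lang", config="seq_bn_inv")

# Task adapter for sentence-level quality estimation.
model.add_adapter("qe", config="seq_bn")

# Freeze the backbone and the language adapter; train only the QE adapter
# (and the regression head), with the language adapter stacked underneath.
model.train_adapter("qe")
model.set_active_adapters(Stack("new_lang", "qe"))

# Hypothetical usage: score a (source, translation) pair.
batch = tokenizer("Guten Morgen", "Good morning", return_tensors="pt")
predicted_da_score = model(**batch).logits

At inference time for a different language pair, only the language adapter would be swapped, leaving the task adapter and the backbone untouched; this is what makes extending the model to new languages cheap.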
Anthology ID:
2021.wmt-1.95
Volume:
Proceedings of the Sixth Conference on Machine Translation
Month:
November
Year:
2021
Address:
Online
Venue:
WMT
SIG:
SIGMT
Publisher:
Association for Computational Linguistics
Pages:
911–919
URL:
https://aclanthology.org/2021.wmt-1.95
Cite (ACL):
Gregor Geigle, Jonas Stadtmüller, Wei Zhao, Jonas Pfeiffer, and Steffen Eger. 2021. TUDa at WMT21: Sentence-Level Direct Assessment with Adapters. In Proceedings of the Sixth Conference on Machine Translation, pages 911–919, Online. Association for Computational Linguistics.
Cite (Informal):
TUDa at WMT21: Sentence-Level Direct Assessment with Adapters (Geigle et al., WMT 2021)
PDF:
https://aclanthology.org/2021.wmt-1.95.pdf