2021
TUDa at WMT21: Sentence-Level Direct Assessment with Adapters
Gregor Geigle | Jonas Stadtmüller | Wei Zhao | Jonas Pfeiffer | Steffen Eger
Proceedings of the Sixth Conference on Machine Translation
This paper presents our submissions to the WMT2021 Shared Task on Quality Estimation, Task 1: Sentence-Level Direct Assessment. While top-performing approaches utilize massively multilingual Transformer-based language models which have been pre-trained on all target languages of the task, the resulting insights are limited, as it is unclear how well the approach performs on languages unseen during pre-training; more problematically, these approaches do not provide any solution for extending the model to new languages or unseen scripts—arguably one of the objectives of this shared task. In this work, we thus focus on utilizing massively multilingual language models which only partly cover the target languages during their pre-training phase. We extend the model to new languages and unseen scripts using recent adapter-based methods, and match or even surpass models pre-trained on the respective languages.
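The adapter-based methods mentioned in the abstract insert small bottleneck modules into an otherwise frozen pre-trained Transformer, so only the adapter weights must be trained for a new language or task. As a minimal illustration (not the paper's actual implementation; dimensions and names are made up for the sketch), a single bottleneck adapter layer can be written as a down-projection, a non-linearity, an up-projection, and a residual connection:

```python
import numpy as np

def adapter_forward(h, W_down, W_up):
    """Bottleneck adapter applied to one hidden state vector.

    h:      (d,)   hidden state from the frozen Transformer layer
    W_down: (r, d) trainable down-projection to bottleneck size r << d
    W_up:   (d, r) trainable up-projection back to model size d
    """
    z = np.maximum(W_down @ h, 0.0)  # down-project, then ReLU
    return h + W_up @ z              # up-project and add residual

# Illustrative sizes: real models use d in the hundreds, r much smaller.
d, r = 8, 2
rng = np.random.default_rng(0)
h = rng.normal(size=d)
W_down = rng.normal(size=(r, d)) * 0.1
W_up = np.zeros((d, r))  # zero-init up-projection: adapter starts as a no-op

out = adapter_forward(h, W_down, W_up)
```

Because the up-projection is initialized to zero, the adapter initially passes hidden states through unchanged; training then nudges it away from the identity, which is one reason adapters can be added to a pre-trained model without destroying its representations.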