2023
Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning
Duarte Alves | Nuno Guerreiro | João Alves | José Pombal | Ricardo Rei | José de Souza | Pierre Colombo | André Martins
Findings of the Association for Computational Linguistics: EMNLP 2023
Large language models (LLMs) are a promising avenue for machine translation (MT). However, current LLM-based MT systems are brittle: their effectiveness depends heavily on the choice of few-shot examples, and they often require extra post-processing due to overgeneration. Alternatives such as finetuning on translation instructions are computationally expensive and may weaken in-context learning capabilities due to overspecialization. In this paper, we take a closer look at this problem. We start by showing that adapter-based finetuning with LoRA matches the performance of traditional finetuning while reducing the number of training parameters by a factor of 50. This method also outperforms few-shot prompting and eliminates the need for post-processing or in-context examples. However, we show that finetuning generally degrades few-shot performance, hindering adaptation capabilities. Finally, to obtain the best of both worlds, we propose a simple approach that incorporates few-shot examples during finetuning. Experiments on 10 language pairs show that our proposed approach recovers the original few-shot capabilities while keeping the added benefits of finetuning.
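A minimal sketch of the adapter-based setup the abstract describes, using the Hugging Face peft library. The base model name, LoRA rank, and target modules below are illustrative assumptions, not the paper's exact configuration; the final example shows the general idea of mixing in-context demonstrations into the finetuning data.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-1.3b"  # hypothetical base LLM, not the paper's model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

lora_config = LoraConfig(
    r=16,                                 # low-rank dimension; keeps trainable params small
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # adapt only the attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the adapter weights are trainable

# A finetuning example that itself contains an in-context demonstration, so the
# model is trained to exploit few-shot examples rather than lose that ability.
example = (
    "English: The weather is nice today.\nPortuguese: O tempo está bom hoje.\n"
    "English: Where is the train station?\nPortuguese: Onde fica a estação de comboios?"
)
batch = tokenizer(example, return_tensors="pt")
```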
2022
Findings of the WMT 2022 Shared Task on Quality Estimation
Chrysoula Zerva | Frédéric Blain | Ricardo Rei | Piyawat Lertvittayakumjorn | José G. C. de Souza | Steffen Eger | Diptesh Kanojia | Duarte Alves | Constantin Orăsan | Marina Fomicheva | André F. T. Martins | Lucia Specia
Proceedings of the Seventh Conference on Machine Translation (WMT)
We report the results of the WMT 2022 shared task on Quality Estimation, in which the challenge is to predict the quality of the output of neural machine translation systems at the word and sentence levels, without access to reference translations. This edition introduces a few novel aspects and extensions that aim to enable more fine-grained and explainable quality estimation approaches. We introduce an updated quality annotation scheme using Multidimensional Quality Metrics to obtain sentence- and word-level quality scores for three language pairs. We also extend the Direct Assessments and post-edit data (MLQE-PE) to new language pairs: we present a novel and large dataset on English-Marathi, as well as a zero-shot test set on English-Yoruba. Further, we include an explainability sub-task for all language pairs and present a new format of the critical error detection task for two new language pairs. Participants from 11 different teams submitted a total of 991 systems across the task variants and language pairs.
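A hedged sketch of how a sentence-level quality score can be derived from MQM-style error annotations of the kind the task uses. The severity weights (minor=1, major=5, critical=10) are commonly used MQM defaults assumed here for illustration; the shared task's exact scheme may differ.

```python
# MQM-style severity weights (assumed defaults, not necessarily the task's).
SEVERITY_WEIGHTS = {"minor": 1.0, "major": 5.0, "critical": 10.0}

def mqm_sentence_score(errors, num_words):
    """Return a length-normalized MQM-style score; higher is better."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for severity in errors)
    return -penalty / max(num_words, 1)

# Example: a 12-word translation with one minor and one major error.
print(mqm_sentence_score(["minor", "major"], num_words=12))  # -0.5
```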
Robust MT Evaluation with Sentence-level Multilingual Augmentation
Duarte Alves | Ricardo Rei | Ana C Farinha | José G. C. de Souza | André F. T. Martins
Proceedings of the Seventh Conference on Machine Translation (WMT)
Automatic translations with critical errors may lead to misinterpretations and pose several risks for the user. As such, it is important that Machine Translation (MT) evaluation systems are robust to these errors in order to increase the reliability and safety of MT systems. Here we introduce SMAUG, a novel Sentence-level Multilingual AUGmentation approach for generating translations with critical errors, and apply it to create a test set for evaluating the robustness of MT metrics to these errors. We show that current state-of-the-art metrics are improving their capability to distinguish between translations with and without critical errors and to penalize the former accordingly. We also show that metrics tend to struggle with errors related to named entities and numbers, and that there is high variance in the robustness of current methods to translations with critical errors.
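An illustrative sketch, not the paper's actual SMAUG implementation, of generating a translation with a critical error by perturbing numbers, one of the error types the abstract notes metrics struggle with.

```python
import random
import re

def corrupt_numbers(translation: str, seed: int = 0) -> str:
    """Replace each number in the translation with a different random number."""
    rng = random.Random(seed)

    def replace(match: re.Match) -> str:
        original = int(match.group())
        candidate = original
        while candidate == original:  # guarantee the value actually changes
            candidate = rng.randint(0, max(10, original * 2))
        return str(candidate)

    return re.sub(r"\d+", replace, translation)

reference = "The meeting starts at 10 and lasts 45 minutes."
print(corrupt_numbers(reference))
# e.g. "The meeting starts at 7 and lasts 62 minutes."
```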
COMET-22: Unbabel-IST 2022 Submission for the Metrics Shared Task
Ricardo Rei | José G. C. de Souza | Duarte Alves | Chrysoula Zerva | Ana C Farinha | Taisiya Glushkova | Alon Lavie | Luisa Coheur | André F. T. Martins
Proceedings of the Seventh Conference on Machine Translation (WMT)
In this paper, we present the joint contribution of Unbabel and IST to the WMT 2022 Metrics Shared Task. Our primary submission, dubbed COMET-22, is an ensemble of a COMET estimator model trained with Direct Assessments and a newly proposed multitask model trained to predict sentence-level scores along with OK/BAD word-level tags derived from Multidimensional Quality Metrics error annotations. These models are combined using a hyper-parameter search that weights different features extracted from both evaluation models and merges them into a single score. For reference-free evaluation, we present CometKiwi. Like our primary submission, CometKiwi is an ensemble of two models: a traditional predictor-estimator model inspired by OpenKiwi, and our new multitask model trained on Multidimensional Quality Metrics, which can also be used without references. Both our submissions show improved correlations compared to state-of-the-art metrics from last year, as well as increased robustness to critical errors.
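A hedged sketch of the ensembling idea: search for weights that combine features from the component models into a single score. Random search, the toy Pearson objective, and the feature names are illustrative assumptions, not the submission's actual procedure.

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def ensemble_score(features, weights):
    """Weighted combination of features extracted from the evaluation models."""
    return sum(weights[name] * value for name, value in features.items())

def search_weights(dev_features, human_scores, trials=1000, seed=0):
    """Keep the weights whose ensemble correlates best with human judgments."""
    rng = random.Random(seed)
    names = list(dev_features[0])
    best, best_corr = None, float("-inf")
    for _ in range(trials):
        weights = {name: rng.random() for name in names}
        preds = [ensemble_score(f, weights) for f in dev_features]
        corr = pearson(preds, human_scores)
        if corr > best_corr:
            best, best_corr = weights, corr
    return best

# Hypothetical per-segment features from the two component models:
dev = [{"da_estimator": 0.71, "mqm_multitask": 0.64},
       {"da_estimator": 0.22, "mqm_multitask": 0.31}]
print(search_weights(dev, human_scores=[0.8, 0.3], trials=100))
```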
CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task
Ricardo Rei | Marcos Treviso | Nuno M. Guerreiro | Chrysoula Zerva | Ana C Farinha | Christine Maroti | José G. C. de Souza | Taisiya Glushkova | Duarte Alves | Luisa Coheur | Alon Lavie | André F. T. Martins
Proceedings of the Seventh Conference on Machine Translation (WMT)
We present the joint contribution of IST and Unbabel to the WMT 2022 Shared Task on Quality Estimation (QE). Our team participated in all three subtasks: (i) Sentence- and Word-level Quality Prediction; (ii) Explainable QE; and (iii) Critical Error Detection. For all tasks, we build on top of the COMET framework, connecting it with the predictor-estimator architecture of OpenKiwi and equipping it with a word-level sequence tagger and an explanation extractor. Our results suggest that incorporating references during pretraining improves performance on downstream tasks across several language pairs, and that jointly training with sentence- and word-level objectives yields a further boost. Furthermore, combining attention and gradient information proved to be the top strategy for extracting good explanations of sentence-level QE models. Overall, our submissions achieved the best results in all three tasks for almost all language pairs, by a considerable margin.
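A hedged sketch of one way to combine attention and gradients into word-level explanations: scale the attention weights by their gradient with respect to the sentence-level score. It assumes a Hugging Face-style regression model that returns attentions and a scalar quality logit, and illustrates the general idea rather than the submission's exact extractor.

```python
import torch

def attention_times_gradient(model, inputs):
    """One forward/backward pass yielding a relevance weight per input token."""
    outputs = model(**inputs, output_attentions=True)
    attn = outputs.attentions[-1]             # last layer: (batch, heads, seq, seq)
    attn.retain_grad()                        # keep gradients for this non-leaf tensor
    score = outputs.logits.squeeze(-1).sum()  # sentence-level quality score
    score.backward()
    relevance = attn * attn.grad              # attention scaled by its gradient
    # Average over heads, sum over query positions -> one weight per token.
    return relevance.mean(dim=1).sum(dim=1).squeeze(0)
```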