Nuno M. Guerreiro
2022
CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task
Ricardo Rei | Marcos Treviso | Nuno M. Guerreiro | Chrysoula Zerva | Ana C Farinha | Christine Maroti | José G. C. de Souza | Taisiya Glushkova | Duarte Alves | Luisa Coheur | Alon Lavie | André F. T. Martins
Proceedings of the Seventh Conference on Machine Translation (WMT)
We present the joint contribution of IST and Unbabel to the WMT 2022 Shared Task on Quality Estimation (QE). Our team participated in all three subtasks: (i) Sentence and Word-level Quality Prediction; (ii) Explainable QE; and (iii) Critical Error Detection. For all tasks we build on top of the COMET framework, connecting it with the predictor-estimator architecture of OpenKiwi, and equipping it with a word-level sequence tagger and an explanation extractor. Our results suggest that incorporating references during pretraining improves performance across several language pairs on downstream tasks, and that jointly training with sentence and word-level objectives yields a further boost. Furthermore, combining attention and gradient information proved to be the top strategy for extracting good explanations of sentence-level QE models. Overall, our submissions achieved the best results for all three tasks for almost all language pairs by a considerable margin.
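As a rough illustration of the joint sentence- and word-level training described above, the sketch below pairs a shared encoder with a sentence regression head and an OK/BAD token tagger and sums the two losses. This is not the submission's code: the stand-in encoder, layer sizes, and the loss weight `lam` are illustrative assumptions; in the actual systems the encoder is a pretrained multilingual transformer fine-tuned within the COMET framework.

```python
import torch
import torch.nn as nn

class JointQEModel(nn.Module):
    def __init__(self, hidden: int = 64, vocab: int = 1000, tags: int = 2):
        super().__init__()
        # Stand-in for a pretrained multilingual encoder (e.g. XLM-R in practice).
        self.embed = nn.Embedding(vocab, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.sent_head = nn.Linear(hidden, 1)     # sentence-level quality score
        self.word_head = nn.Linear(hidden, tags)  # OK/BAD tag per token

    def forward(self, ids):
        states = self.encoder(self.embed(ids))           # (batch, seq, hidden)
        sent = self.sent_head(states[:, 0]).squeeze(-1)  # pool the first token
        words = self.word_head(states)                   # (batch, seq, tags)
        return sent, words

def joint_loss(sent, words, gold_score, gold_tags, lam=0.5):
    # Weighted sum of the sentence regression and word tagging objectives.
    mse = nn.functional.mse_loss(sent, gold_score)
    ce = nn.functional.cross_entropy(words.transpose(1, 2), gold_tags)
    return mse + lam * ce

# Usage sketch on random data.
model = JointQEModel()
ids = torch.randint(0, 1000, (2, 10))
sent, words = model(ids)
loss = joint_loss(sent, words, torch.rand(2), torch.randint(0, 2, (2, 10)))
loss.backward()
```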
2021
IST-Unbabel 2021 Submission for the Explainable Quality Estimation Shared Task
Marcos Treviso | Nuno M. Guerreiro | Ricardo Rei | André F. T. Martins
Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems
We present the joint contribution of Instituto Superior Técnico (IST) and Unbabel to the Explainable Quality Estimation (QE) shared task, where systems were submitted to two tracks: constrained (without word-level supervision) and unconstrained (with word-level supervision). For the constrained track, we experimented with several explainability methods to extract the relevance of input tokens from sentence-level QE models built on top of multilingual pre-trained transformers. Among the different tested methods, composing explanations in the form of attention weights scaled by the norm of value vectors yielded the best results. When word-level labels are used during training, our best results were obtained by using word-level predicted probabilities. We further improve the performance of our methods on the two tracks by ensembling explanation scores extracted from models trained with different pre-trained transformers, achieving strong results for in-domain and zero-shot language pairs.
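To make the best-performing constrained-track method concrete, here is a minimal sketch of token relevance computed as attention weights scaled by the norms of the corresponding value vectors. The tensor shapes and the head/query aggregation are assumptions for illustration, not the submission code; in the submission these quantities come from the layers and heads of the sentence-level QE model's transformer.

```python
import torch

def attn_value_norm_relevance(attn: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
    """attn: (heads, T, T) attention probabilities; values: (heads, T, d_v)."""
    v_norm = values.norm(dim=-1)         # (heads, T): ||v_j|| for each key token j
    scaled = attn * v_norm.unsqueeze(1)  # weight alpha_ij by ||v_j||
    # Average over heads, sum over query positions i -> one score per token j.
    return scaled.mean(dim=0).sum(dim=0)  # (T,)

# Usage sketch with random tensors: 8 heads, 12 tokens, value dim 16.
attn = torch.softmax(torch.randn(8, 12, 12), dim=-1)
values = torch.randn(8, 12, 16)
scores = attn_value_norm_relevance(attn, values)  # (12,) token relevances
```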
SPECTRA: Sparse Structured Text Rationalization
Nuno M. Guerreiro | André F. T. Martins
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Selective rationalization aims to produce decisions along with rationales (e.g., text highlights or word alignments between two sentences). Commonly, rationales are modeled as stochastic binary masks, requiring sampling-based gradient estimators, which complicates training and requires careful hyperparameter tuning. Sparse attention mechanisms are a deterministic alternative, but they lack a way to regularize the rationale extraction (e.g., to control the sparsity of a text highlight or the number of alignments). In this paper, we present a unified framework for deterministic extraction of structured explanations via constrained inference on a factor graph, forming a differentiable layer. Our approach greatly eases training and rationale regularization, generally outperforming previous work in both predictive performance and plausibility of the extracted rationales. We further provide a comparative study of stochastic and deterministic methods for rationale extraction for classification and natural language inference tasks, jointly assessing their predictive power, quality of the explanations, and model variability.
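The paper's layer performs constrained inference on a factor graph; as a much simpler stand-in that conveys the "deterministic, differentiable sparse mask" idea, the sketch below uses sparsemax (Martins & Astudillo, 2016) over token scores. This is an illustrative substitute, not the SPECTRA layer itself, and it cannot encode the structured constraints (e.g., highlight budgets or contiguity) that the factor-graph formulation handles.

```python
import torch

def sparsemax(z: torch.Tensor) -> torch.Tensor:
    """Euclidean projection of a 1-D score vector onto the probability simplex.
    Returns a sparse, deterministic distribution; gradients flow through it."""
    z_sorted, _ = torch.sort(z, descending=True)
    k = torch.arange(1, z.numel() + 1, dtype=z.dtype)
    cssv = z_sorted.cumsum(0) - 1.0
    support = z_sorted - cssv / k > 0  # tokens kept in the support
    rho = int(support.sum())           # support size
    tau = cssv[rho - 1] / rho          # threshold
    return torch.clamp(z - tau, min=0.0)

# Deterministic rationale: the nonzero entries of the mask are the highlight.
scores = torch.tensor([2.0, 0.1, 1.5, -0.3], requires_grad=True)
mask = sparsemax(scores)   # tensor([0.75, 0.00, 0.25, 0.00])
mask.sum().backward()      # differentiable end to end, no sampling needed
```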