Abstract
Recent Quality Estimation (QE) models based on multilingual pre-trained representations have achieved very competitive results in predicting the overall quality of translated sentences. However, detecting specifically which translated words are incorrect is a more challenging task, especially when dealing with limited amounts of training data. We hypothesize that, not unlike humans, successful QE models rely on translation errors to predict overall sentence quality. By exploring a set of feature attribution methods that assign relevance scores to the inputs to explain model predictions, we study the behaviour of state-of-the-art sentence-level QE models and show that explanations (i.e. rationales) extracted from these models can indeed be used to detect translation errors. We therefore (i) introduce a novel semi-supervised method for word-level QE; and (ii) propose to use the QE task as a new benchmark for evaluating the plausibility of feature attribution, i.e. how interpretable model explanations are to humans.

- Anthology ID: 2022.findings-acl.327
- Volume: Findings of the Association for Computational Linguistics: ACL 2022
- Month: May
- Year: 2022
- Address: Dublin, Ireland
- Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 4148–4159
- URL: https://aclanthology.org/2022.findings-acl.327
- DOI: 10.18653/v1/2022.findings-acl.327
- Cite (ACL): Marina Fomicheva, Lucia Specia, and Nikolaos Aletras. 2022. Translation Error Detection as Rationale Extraction. In Findings of the Association for Computational Linguistics: ACL 2022, pages 4148–4159, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): Translation Error Detection as Rationale Extraction (Fomicheva et al., Findings 2022)
- PDF: https://preview.aclanthology.org/ingest-2024-clasp/2022.findings-acl.327.pdf
- Data: MLQE-PE
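
The abstract's core idea — extracting word-level error rationales from a sentence-level QE model via feature attribution — can be made concrete with a toy sketch. This is illustrative only: the paper applies attribution methods to Transformer-based QE models, whereas the linear model, token list, and weights below are all assumed for demonstration. For a linear scorer, the input-times-gradient attribution reduces to each token's feature times its weight.

```python
import numpy as np

# Hypothetical MT output with one indicator feature per token.
tokens = ["the", "cat", "sitted", "on", "mat"]
x = np.ones(len(tokens))
w = np.array([0.10, 0.20, -0.90, 0.10, 0.15])  # assumed learned weights

# Sentence-level QE score (higher = better quality).
quality = float(w @ x)

# Input-times-gradient attribution: for a linear model, the gradient of the
# score w.r.t. x is w, so each token's relevance is x_i * w_i.
relevance = x * w

# Tokens with negative relevance are flagged as likely translation errors,
# turning the sentence-level model into a word-level error detector.
errors = [t for t, r in zip(tokens, relevance) if r < 0]
print(errors)  # → ['sitted']
```

The same recipe carries over to neural QE models by replacing the analytic gradient with an attribution method (e.g. gradient-based saliency), then ranking tokens by relevance.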