Disentangling Uncertainty in Machine Translation Evaluation
Chrysoula Zerva, Taisiya Glushkova, Ricardo Rei, André F. T. Martins
Abstract
Trainable evaluation metrics for machine translation (MT) exhibit strong correlation with human judgements, but they are often hard to interpret and may produce unreliable scores under noisy or out-of-domain data. Recent work has attempted to mitigate this with simple uncertainty quantification techniques (Monte Carlo dropout and deep ensembles); however, as we show, these techniques are limited in several ways: for example, they are unable to distinguish between different kinds of uncertainty, and they are time- and memory-consuming. In this paper, we propose more powerful and efficient uncertainty predictors for MT evaluation, and we assess their ability to target different sources of aleatoric and epistemic uncertainty. To this end, we develop and compare training objectives for the COMET metric that enhance it with an uncertainty prediction output, including heteroscedastic regression, divergence minimization, and direct uncertainty prediction. Our experiments show improved results on uncertainty prediction for the WMT metrics task datasets, with a substantial reduction in computational costs. Moreover, they demonstrate the ability of these predictors to address specific causes of uncertainty in MT evaluation, such as low-quality references and out-of-domain data.
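To make the heteroscedastic regression objective named in the abstract concrete, below is a minimal sketch, assuming PyTorch; it is not the authors' COMET implementation, and the names `HeteroscedasticHead` and `gaussian_nll` are hypothetical. The idea is that the metric predicts both a mean quality score and a log-variance, and is trained with the Gaussian negative log-likelihood, so the predicted variance can absorb input-dependent (aleatoric) noise.

```python
# Minimal sketch of heteroscedastic regression (assumption: PyTorch;
# not the authors' COMET code). The model predicts a mean quality score
# mu and a log-variance; training minimizes the Gaussian negative
# log-likelihood, so sigma^2 grows on inputs the model finds noisy.
import torch
import torch.nn as nn


class HeteroscedasticHead(nn.Module):
    """Two-headed regressor over pooled sentence-pair features (hypothetical)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.mu = nn.Linear(hidden_dim, 1)       # predicted quality score
        self.log_var = nn.Linear(hidden_dim, 1)  # predicted log sigma^2

    def forward(self, features: torch.Tensor):
        return self.mu(features).squeeze(-1), self.log_var(features).squeeze(-1)


def gaussian_nll(mu, log_var, target):
    # 0.5 * [log sigma^2 + (y - mu)^2 / sigma^2], averaged over the batch
    return 0.5 * (log_var + (target - mu) ** 2 / log_var.exp()).mean()


# Usage sketch with random features standing in for encoder outputs:
head = HeteroscedasticHead(hidden_dim=768)
feats = torch.randn(8, 768)   # stand-in for pooled sentence-pair features
scores = torch.rand(8)        # stand-in human quality judgements
mu, log_var = head(feats)
loss = gaussian_nll(mu, log_var, scores)
loss.backward()
```

A single forward pass then yields both a score (`mu`) and an uncertainty estimate (`exp(log_var)`), avoiding the repeated sampling that makes Monte Carlo dropout and deep ensembles expensive.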
- Anthology ID: 2022.emnlp-main.591
- Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates
- Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 8622–8641
- URL: https://aclanthology.org/2022.emnlp-main.591
- DOI: 10.18653/v1/2022.emnlp-main.591
- Cite (ACL): Chrysoula Zerva, Taisiya Glushkova, Ricardo Rei, and André F. T. Martins. 2022. Disentangling Uncertainty in Machine Translation Evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8622–8641, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal): Disentangling Uncertainty in Machine Translation Evaluation (Zerva et al., EMNLP 2022)
- PDF: https://aclanthology.org/2022.emnlp-main.591.pdf