Training and Meta-Evaluating Machine Translation Evaluation Metrics at the Paragraph Level

Daniel Deutsch, Juraj Juraska, Mara Finkelstein, Markus Freitag


Abstract
As research on machine translation moves to translating text beyond the sentence level, it remains unclear how effective automatic evaluation metrics are at scoring longer translations. In this work, we first propose a method for creating paragraph-level data for training and meta-evaluating metrics from existing sentence-level data. Then, we use these new datasets to benchmark existing sentence-level metrics as well as to train learned metrics at the paragraph level. Interestingly, our experimental results demonstrate that using sentence-level metrics to score entire paragraphs is as effective as using a metric designed to work at the paragraph level. We speculate this result can be attributed to properties of the task of reference-based evaluation as well as to limitations of our datasets with respect to capturing all types of phenomena that occur in paragraph-level translations.
Anthology ID:
2023.wmt-1.96
Volume:
Proceedings of the Eighth Conference on Machine Translation
Month:
December
Year:
2023
Address:
Singapore
Editors:
Philipp Koehn, Barry Haddow, Tom Kocmi, Christof Monz
Venue:
WMT
SIG:
SIGMT
Publisher:
Association for Computational Linguistics
Pages:
996–1013
URL:
https://aclanthology.org/2023.wmt-1.96
DOI:
10.18653/v1/2023.wmt-1.96
Cite (ACL):
Daniel Deutsch, Juraj Juraska, Mara Finkelstein, and Markus Freitag. 2023. Training and Meta-Evaluating Machine Translation Evaluation Metrics at the Paragraph Level. In Proceedings of the Eighth Conference on Machine Translation, pages 996–1013, Singapore. Association for Computational Linguistics.
Cite (Informal):
Training and Meta-Evaluating Machine Translation Evaluation Metrics at the Paragraph Level (Deutsch et al., WMT 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2023.wmt-1.96.pdf