Investigating Data Variance in Evaluations of Automatic Machine Translation Metrics

Jiannan Xiang, Huayang Li, Yahui Liu, Lemao Liu, Guoping Huang, Defu Lian, Shuming Shi


Abstract
Current practices in metric evaluation focus on a single dataset, e.g., the Newstest dataset in each year's WMT Metrics Shared Task. However, in this paper, we qualitatively and quantitatively show that the performance of metrics is sensitive to data: the ranking of metrics varies when the evaluation is conducted on different datasets. This paper further investigates two potential hypotheses, i.e., insignificant data points and deviation from the i.i.d. assumption, which may be responsible for the issue of data variance. In conclusion, our findings suggest that when evaluating automatic translation metrics, researchers should take data variance into account and be cautious about reporting results on unreliable datasets, because this may lead to results inconsistent with those on most other datasets.
Anthology ID:
2022.findings-acl.14
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
150–157
URL:
https://aclanthology.org/2022.findings-acl.14
DOI:
10.18653/v1/2022.findings-acl.14
Cite (ACL):
Jiannan Xiang, Huayang Li, Yahui Liu, Lemao Liu, Guoping Huang, Defu Lian, and Shuming Shi. 2022. Investigating Data Variance in Evaluations of Automatic Machine Translation Metrics. In Findings of the Association for Computational Linguistics: ACL 2022, pages 150–157, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Investigating Data Variance in Evaluations of Automatic Machine Translation Metrics (Xiang et al., Findings 2022)
PDF:
https://preview.aclanthology.org/naacl24-info/2022.findings-acl.14.pdf