Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis

Wenda Xu, Yi-Lin Tuan, Yujie Lu, Michael Saxon, Lei Li, William Yang Wang


Abstract
Is it possible to build a general and automatic natural language generation (NLG) evaluation metric? Existing learned metrics either perform unsatisfactorily or are restricted to tasks where large human rating data are already available. We introduce SESCORE, a model-based metric that is highly correlated with human judgements without requiring human annotation, by utilizing a novel, iterative error synthesis and severity scoring pipeline. This pipeline applies a series of plausible errors to raw text and assigns severity labels by simulating human judgements with entailment. We evaluate SESCORE against existing metrics by comparing how their scores correlate with human ratings. SESCORE outperforms all prior unsupervised metrics on multiple diverse NLG tasks, including machine translation, image captioning, and WebNLG text generation. For WMT 20/21 En-De and Zh-En, SESCORE improves the average Kendall correlation with human judgement from 0.154 to 0.195. SESCORE even achieves performance comparable to the best supervised metric, COMET, despite receiving no human-annotated training data.
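The abstract's pipeline can be illustrated with a toy sketch: perturb a reference sentence with increasingly many synthetic errors (more steps standing in for higher severity), score each perturbed candidate with a simple automatic metric, and check that the metric's ranking agrees with the intended severity ordering via Kendall's tau, the correlation statistic the paper reports. All function names, the perturbation rules, and the token-overlap metric below are illustrative assumptions, not the authors' implementation.

```python
import random

def synthesize_error(tokens, rng):
    """Apply one plausible perturbation: drop a token or swap adjacent tokens."""
    tokens = list(tokens)
    if len(tokens) < 2:
        return tokens
    if rng.random() < 0.5:
        del tokens[rng.randrange(len(tokens))]               # deletion error
    else:
        i = rng.randrange(len(tokens) - 1)
        tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]  # word-order error
    return tokens

def kendall_tau(xs, ys):
    """Kendall rank correlation (tau-a) between two score lists."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

rng = random.Random(0)
reference = "the quick brown fox jumps over the lazy dog".split()

# Stratified synthesis: more perturbation steps stand in for higher severity.
candidates = [reference]
for severity in (1, 2, 4):
    perturbed = reference
    for _ in range(severity):
        perturbed = synthesize_error(perturbed, rng)
    candidates.append(perturbed)

# Toy metric: unique-token overlap with the reference (higher is better).
scores = [len(set(c) & set(reference)) / len(set(reference)) for c in candidates]
# Simulated "human" ratings: fewer synthetic errors is better.
human = [4, 3, 2, 1]
print(round(kendall_tau(scores, human), 3))
```

In the paper, the perturbed candidates and their entailment-derived severity labels train a learned regression metric; here the final print simply reports how well the toy metric's scores rank-correlate with the simulated ratings.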
Anthology ID:
2022.findings-emnlp.489
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6559–6574
URL:
https://aclanthology.org/2022.findings-emnlp.489
DOI:
10.18653/v1/2022.findings-emnlp.489
Cite (ACL):
Wenda Xu, Yi-Lin Tuan, Yujie Lu, Michael Saxon, Lei Li, and William Yang Wang. 2022. Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6559–6574, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis (Xu et al., Findings 2022)
PDF:
https://preview.aclanthology.org/add_acl24_videos/2022.findings-emnlp.489.pdf
Video:
https://preview.aclanthology.org/add_acl24_videos/2022.findings-emnlp.489.mp4