BLEU Neighbors: A Reference-less Approach to Automatic Evaluation

Kawin Ethayarajh, Dorsa Sadigh


Abstract
Evaluation is a bottleneck in the development of natural language generation (NLG) models. Automatic metrics such as BLEU rely on references, but for tasks such as open-ended generation, there are no references to draw upon. Although language diversity can be estimated using statistical measures such as perplexity, measuring language quality requires human evaluation. However, because human evaluation at scale is slow and expensive, it is used sparingly; it cannot be used to rapidly iterate on NLG models, in the way BLEU is used for machine translation. To this end, we propose BLEU Neighbors, a nearest neighbors model for estimating language quality by using the BLEU score as a kernel function. On existing datasets for chitchat dialogue and open-ended sentence generation, we find that – on average – the quality estimation from a BLEU Neighbors model has a lower mean squared error and higher Spearman correlation with the ground truth than individual human annotators. Despite its simplicity, BLEU Neighbors even outperforms state-of-the-art models on automatically grading essays, including models that have access to a gold-standard reference essay.
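The abstract describes a nearest-neighbors estimator that uses BLEU as the similarity kernel. As a rough illustration of that idea (not the authors' exact implementation), the Python sketch below scores a candidate sentence by averaging the human ratings of its k most BLEU-similar annotated sentences; the top-k selection rule, the value of k, the smoothing choice, and all function and variable names are illustrative assumptions.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_neighbors_score(candidate, rated_sentences, k=3):
    """Estimate the quality of `candidate` from (sentence, human_rating) pairs.

    Illustrative sketch only: the paper's exact neighbor-selection rule
    may differ (e.g., a BLEU threshold rather than a fixed k).
    """
    smooth = SmoothingFunction().method1
    cand_tokens = candidate.split()
    # Sentence-level BLEU between the candidate and each annotated
    # sentence plays the role of the kernel (similarity) function.
    sims = [
        (sentence_bleu([sent.split()], cand_tokens, smoothing_function=smooth),
         rating)
        for sent, rating in rated_sentences
    ]
    # Average the human ratings of the k most BLEU-similar neighbors.
    top_k = sorted(sims, key=lambda pair: pair[0], reverse=True)[:k]
    return sum(rating for _, rating in top_k) / len(top_k)

# Example: annotated neighbors with hypothetical 1-5 quality ratings.
rated = [
    ("the cat sat on the mat", 4.5),
    ("a dog ran in the park", 3.5),
    ("colorless green ideas sleep furiously", 1.5),
]
print(bleu_neighbors_score("the cat sat on a mat", rated, k=2))

Because no reference for the candidate is needed, only a pool of previously rated sentences, the estimator is reference-less in the sense the title uses.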
Anthology ID:
2020.eval4nlp-1.5
Volume:
Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems
Month:
November
Year:
2020
Address:
Online
Venue:
Eval4NLP
Publisher:
Association for Computational Linguistics
Pages:
40–50
URL:
https://aclanthology.org/2020.eval4nlp-1.5
DOI:
10.18653/v1/2020.eval4nlp-1.5
Cite (ACL):
Kawin Ethayarajh and Dorsa Sadigh. 2020. BLEU Neighbors: A Reference-less Approach to Automatic Evaluation. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 40–50, Online. Association for Computational Linguistics.
Cite (Informal):
BLEU Neighbors: A Reference-less Approach to Automatic Evaluation (Ethayarajh & Sadigh, Eval4NLP 2020)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2020.eval4nlp-1.5.pdf
Video:
https://slideslive.com/38939709