Curious Case of Language Generation Evaluation Metrics: A Cautionary Tale

Ozan Caglayan, Pranava Madhyastha, Lucia Specia


Abstract
Automatic evaluation of language generation systems is a well-studied problem in Natural Language Processing. While novel metrics are proposed every year, a few popular metrics remain the de facto standard for evaluating tasks such as image captioning and machine translation, despite their known limitations. This is partly due to ease of use, and partly because researchers expect to see them and know how to interpret them. In this paper, we urge the community to consider more carefully how they automatically evaluate their models, by demonstrating important failure cases on multiple datasets, language pairs and tasks. Our experiments show that metrics (i) usually prefer system outputs to human-authored texts, (ii) can be insensitive to correct translations of rare words, and (iii) can yield surprisingly high scores when given a single sentence as system output for the entire test set.
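
The third failure case is straightforward to probe with off-the-shelf tooling. The sketch below is not from the paper: it assumes the sacrebleu package and uses toy placeholder references, and it only shows how such a degenerate-system probe could be set up. The paper's experiments run this kind of probe over full test sets, multiple metrics, and multiple language pairs, so the scores here are illustrative only.

    # Illustrative sketch (not the paper's code): score a degenerate "system"
    # that returns the same sentence for every test input. Assumes the
    # sacrebleu package; the references below are toy placeholders.
    import sacrebleu

    # Toy reference translations standing in for a real test set.
    references = [
        "a man is riding a bicycle down the street .",
        "two dogs are playing in the park .",
        "a woman is cooking dinner in the kitchen .",
    ]

    # Degenerate system output: one fixed sentence repeated for every input.
    fixed_sentence = "a man is standing in the street ."
    hypotheses = [fixed_sentence] * len(references)

    # sacrebleu takes a list of hypothesis strings and a list of
    # reference streams (one stream per reference set).
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    chrf = sacrebleu.corpus_chrf(hypotheses, [references])

    print(f"BLEU: {bleu.score:.2f}")
    print(f"chrF: {chrf.score:.2f}")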
Anthology ID: 2020.coling-main.210
Volume: Proceedings of the 28th International Conference on Computational Linguistics
Month: December
Year: 2020
Address: Barcelona, Spain (Online)
Venue: COLING
Publisher: International Committee on Computational Linguistics
Pages: 2322–2328
URL: https://aclanthology.org/2020.coling-main.210
DOI: 10.18653/v1/2020.coling-main.210
Cite (ACL): Ozan Caglayan, Pranava Madhyastha, and Lucia Specia. 2020. Curious Case of Language Generation Evaluation Metrics: A Cautionary Tale. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2322–2328, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal): Curious Case of Language Generation Evaluation Metrics: A Cautionary Tale (Caglayan et al., COLING 2020)
PDF: https://preview.aclanthology.org/auto-file-uploads/2020.coling-main.210.pdf