Sentence Mover’s Similarity: Automatic Evaluation for Multi-Sentence Texts

Elizabeth Clark, Asli Celikyilmaz, Noah A. Smith


Abstract
For evaluating machine-generated texts, automatic methods hold the promise of avoiding the collection of human judgments, which can be expensive and time-consuming. The most common automatic metrics, like BLEU and ROUGE, depend on exact word matching, an inflexible approach for measuring semantic similarity. We introduce methods based on sentence mover’s similarity; our automatic metrics evaluate text in a continuous space using word and sentence embeddings. We find that sentence-based metrics correlate with human judgments significantly better than ROUGE, both on machine-generated summaries (average length of 3.4 sentences) and human-authored essays (average length of 7.5 sentences). We also show that sentence mover’s similarity can be used as a reward when learning a generation model via reinforcement learning; we present both automatic and human evaluations of summaries learned in this way, finding that our approach outperforms ROUGE.
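The abstract frames evaluation as a transport problem between the sentence embeddings of two texts. The sketch below is a minimal illustration of that idea, not the paper's exact formulation: it represents each sentence as the average of its word embeddings (one representation the paper considers), uses uniform sentence weights rather than length-based ones, applies a nearest-neighbor relaxation in place of the exact optimal-transport cost, and converts the cost to a similarity with `exp(-cost)`. The function names and toy vectors are illustrative assumptions.

```python
import numpy as np

def sentence_embeddings(doc, word_vecs):
    # One sentence vector per sentence: the mean of its word embeddings.
    # (Average word embeddings are one sentence representation the
    # paper explores; others are possible.)
    return np.array([np.mean([word_vecs[w] for w in sent], axis=0)
                     for sent in doc])

def relaxed_sms(doc_a, doc_b, word_vecs):
    """Relaxed sentence mover's similarity (illustrative sketch).

    Each sentence in one document is matched to its nearest sentence
    in the other, a standard lower-bound relaxation of the exact
    transport cost; sentences are weighted uniformly here.
    """
    A = sentence_embeddings(doc_a, word_vecs)
    B = sentence_embeddings(doc_b, word_vecs)
    # Pairwise Euclidean distances between sentence embeddings.
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # Symmetrize: average the two directed nearest-neighbor costs.
    cost = 0.5 * dists.min(axis=1).mean() + 0.5 * dists.min(axis=0).mean()
    # Map a distance (0 = identical) to a similarity in (0, 1].
    return float(np.exp(-cost))
```

Identical documents incur zero transport cost and score 1.0; the further apart the sentence embeddings, the lower the similarity. The exact metric would instead solve the full earth mover's (optimal transport) problem over the sentence-distance matrix.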
Anthology ID:
P19-1264
Original:
P19-1264v1
Version 2:
P19-1264v2
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2748–2760
URL:
https://aclanthology.org/P19-1264
DOI:
10.18653/v1/P19-1264
Cite (ACL):
Elizabeth Clark, Asli Celikyilmaz, and Noah A. Smith. 2019. Sentence Mover’s Similarity: Automatic Evaluation for Multi-Sentence Texts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2748–2760, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Sentence Mover’s Similarity: Automatic Evaluation for Multi-Sentence Texts (Clark et al., ACL 2019)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/P19-1264.pdf
Video:
https://vimeo.com/384736132