Metrics also Disagree in the Low Scoring Range: Revisiting Summarization Evaluation Metrics

Manik Bhandari, Pranav Narayan Gour, Atabak Ashfaq, Pengfei Liu


Abstract
In text summarization, evaluating the efficacy of automatic metrics without human judgments has recently become popular. One exemplar work (Peyrard, 2019) concludes that automatic metrics strongly disagree when ranking high-scoring summaries. In this paper, we revisit their experiments and find that their observations stem from the fact that metrics disagree in ranking summaries from any narrow scoring range. We hypothesize that this may be because summaries are similar to each other in a narrow scoring range and are thus difficult to rank. Apart from the width of the scoring range of summaries, we analyze three other properties that impact inter-metric agreement: Ease of Summarization, Abstractiveness, and Coverage.
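
As a rough illustration of the kind of analysis the abstract describes (a minimal sketch, not the authors' exact protocol), the Python snippet below computes Kendall's tau agreement between two metrics, once over the full scoring range and once restricted to a narrow band; the metric names and score values are hypothetical placeholders.

    # Minimal sketch of inter-metric agreement within a scoring range.
    # Metric names and scores below are hypothetical, for illustration only.
    from scipy.stats import kendalltau

    def agreement_in_range(scores_a, scores_b, low, high):
        """Kendall's tau between two metrics, restricted to summaries whose
        score under metric A falls in [low, high]."""
        pairs = [(a, b) for a, b in zip(scores_a, scores_b) if low <= a <= high]
        if len(pairs) < 2:
            return None  # too few summaries in this range to rank
        a_sel, b_sel = zip(*pairs)
        tau, _ = kendalltau(a_sel, b_sel)
        return tau

    # Made-up scores: agreement tends to be high over a wide range but can
    # drop when only summaries from a narrow band are compared.
    metric_a = [0.12, 0.35, 0.36, 0.37, 0.80]  # hypothetical metric A
    metric_b = [0.10, 0.40, 0.33, 0.39, 0.75]  # hypothetical metric B
    print(agreement_in_range(metric_a, metric_b, 0.0, 1.0))    # wide range
    print(agreement_in_range(metric_a, metric_b, 0.34, 0.38))  # narrow range
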
Anthology ID:
2020.coling-main.501
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5702–5711
URL:
https://aclanthology.org/2020.coling-main.501
DOI:
10.18653/v1/2020.coling-main.501
Cite (ACL):
Manik Bhandari, Pranav Narayan Gour, Atabak Ashfaq, and Pengfei Liu. 2020. Metrics also Disagree in the Low Scoring Range: Revisiting Summarization Evaluation Metrics. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5702–5711, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Metrics also Disagree in the Low Scoring Range: Revisiting Summarization Evaluation Metrics (Bhandari et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.501.pdf
Data
CNN/Daily Mail