Studying Summarization Evaluation Metrics in the Appropriate Scoring Range

Maxime Peyrard


Abstract
In summarization, automatic evaluation metrics are usually compared based on their ability to correlate with human judgments. Unfortunately, the few existing human judgment datasets have been created as by-products of the manual evaluations performed during the DUC/TAC shared tasks. However, modern systems are typically better than the best systems submitted at the time of these shared tasks. We show that, surprisingly, evaluation metrics which behave similarly on these datasets (average-scoring range) strongly disagree in the higher-scoring range in which current systems now operate. This is problematic because the metrics disagree, yet we cannot decide which one to trust. This is a call for collecting human judgments for high-scoring summaries, as this would resolve the debate over which metrics to trust. It would also greatly benefit the further improvement of summarization systems and metrics alike.
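The core observation — that two metrics can agree on average-scoring summaries yet disagree sharply on high-scoring ones — can be illustrated with a minimal sketch. The data below are hypothetical, not from the paper; only the idea of computing rank correlation on the full range versus a restricted high-scoring range reflects the study's setup.

```python
def kendall_tau(xs, ys):
    """Kendall rank correlation: (concordant - discordant) / total pairs."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical scores assigned by two metrics to the same eight summaries,
# sorted by metric A. Metric B agrees on the low range but reverses the
# ordering among the top-scoring summaries.
metric_a = [0.10, 0.20, 0.30, 0.40, 0.55, 0.60, 0.70, 0.80]
metric_b = [0.12, 0.18, 0.35, 0.38, 0.70, 0.65, 0.62, 0.60]

overall = kendall_tau(metric_a, metric_b)        # correlation on all summaries
top_half = kendall_tau(metric_a[4:], metric_b[4:])  # high-scoring range only
print(overall, top_half)  # positive overall, -1.0 in the top range
```

On these toy numbers the metrics correlate positively overall but are perfectly anti-correlated in the top half, which is the kind of range-dependent disagreement the paper measures on real metric outputs.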
Anthology ID:
P19-1502
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5093–5100
URL:
https://aclanthology.org/P19-1502
DOI:
10.18653/v1/P19-1502
Cite (ACL):
Maxime Peyrard. 2019. Studying Summarization Evaluation Metrics in the Appropriate Scoring Range. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5093–5100, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Studying Summarization Evaluation Metrics in the Appropriate Scoring Range (Peyrard, ACL 2019)
PDF:
https://preview.aclanthology.org/ingestion-script-update/P19-1502.pdf
Video:
https://vimeo.com/385273276
Code:
PeyrardM/acl-2019-Compare_Evaluation_Metrics