Evaluating Multiple System Summary Lengths: A Case Study

Ori Shapira, David Gabay, Hadar Ronen, Judit Bar-Ilan, Yael Amsterdamer, Ani Nenkova, Ido Dagan


Abstract
Practical summarization systems are expected to produce summaries of varying lengths, per user needs. While a couple of early summarization benchmarks tested systems across multiple summary lengths, this practice was mostly abandoned due to the assumed cost of producing reference summaries of multiple lengths. In this paper, we raise the research question of whether reference summaries of a single length can be used to reliably evaluate system summaries of multiple lengths. For that, we have analyzed a couple of datasets as a case study, using several variants of the ROUGE metric that are standard in summarization evaluation. Our findings indicate that the evaluation protocol in question is indeed competitive. This result paves the way to practically evaluating varying-length summaries with simple, possibly existing, summarization benchmarks.
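The evaluation protocol studied in the paper compares system summaries of several lengths against reference summaries of a single length using standard ROUGE variants. As a minimal illustration only (not the authors' code; the paper's experiments were presumably run with the standard ROUGE toolkit), the sketch below scores hypothetical system summaries of increasing length against one fixed-length reference using the open-source rouge-score Python package; the summary texts and length labels are invented for the example.

```python
# Minimal sketch: score system summaries of several lengths against a single
# reference summary of one fixed length, with standard ROUGE variants.
from rouge_score import rouge_scorer

# Hypothetical reference and system outputs (illustrative only).
reference = "The council approved the new budget after a lengthy debate on spending."
system_summaries = {
    25: "The council approved the budget.",
    50: "The council approved the new budget after debate.",
    100: "After a lengthy debate on spending, the council approved the new budget for next year.",
}

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

for target_length, summary in sorted(system_summaries.items()):
    # score(target, prediction) returns precision, recall and F1 per variant.
    scores = scorer.score(reference, summary)
    report = ", ".join(
        f"{name}: R={s.recall:.3f} F1={s.fmeasure:.3f}" for name, s in scores.items()
    )
    print(f"system summary (~{target_length} words): {report}")
```

Reporting both recall and F1 matters here: recall-oriented scores favor longer system summaries against a fixed-length reference, whereas F1 penalizes the length mismatch, which is exactly the kind of behavior the paper's protocol has to account for.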
Anthology ID: D18-1087
Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month: October-November
Year: 2018
Address: Brussels, Belgium
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 774–778
URL: https://aclanthology.org/D18-1087
DOI: 10.18653/v1/D18-1087
Cite (ACL): Ori Shapira, David Gabay, Hadar Ronen, Judit Bar-Ilan, Yael Amsterdamer, Ani Nenkova, and Ido Dagan. 2018. Evaluating Multiple System Summary Lengths: A Case Study. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 774–778, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Evaluating Multiple System Summary Lengths: A Case Study (Shapira et al., EMNLP 2018)
PDF: https://preview.aclanthology.org/starsem-semeval-split/D18-1087.pdf