Automated Pyramid Summarization Evaluation

Yanjun Gao, Chen Sun, Rebecca J. Passonneau


Abstract
Pyramid evaluation was developed to assess the content of paragraph-length summaries of source texts. A pyramid lists the distinct units of content found in several reference summaries, weights each content unit by how many reference summaries it occurs in, and produces three scores based on the weighted content of new summaries. We present an automated method that is more efficient, more transparent, and more complete than previous automated pyramid methods. It is tested on a new dataset of student summaries, and on historical NIST data from extractive summarizers.
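The scoring idea described in the abstract can be illustrated with a minimal sketch (not the authors' system; the function names, the choice of raw/quality/coverage normalizations, and the representation of content units as hashable labels are assumptions for illustration):

```python
from collections import Counter

def build_pyramid(reference_scus):
    """Weight each summary content unit (SCU) by the number of
    reference summaries it appears in."""
    weights = Counter()
    for summary in reference_scus:          # one list of SCU labels per reference summary
        for scu in set(summary):            # count each SCU at most once per summary
            weights[scu] += 1
    return weights

def pyramid_scores(pyramid, candidate_scus, avg_ref_scu_count):
    """Return a raw score and two normalized scores for a candidate summary.
    Normalizations follow one common convention: divide the raw score by the
    maximum weight attainable with a given number of SCUs."""
    matched = set(candidate_scus)
    raw = sum(pyramid[scu] for scu in matched if scu in pyramid)

    def max_score(n):
        # Best possible score for a summary expressing n content units:
        # sum of the n highest pyramid weights.
        return sum(sorted(pyramid.values(), reverse=True)[:n])

    quality = raw / max_score(len(matched)) if matched else 0.0
    coverage = raw / max_score(round(avg_ref_scu_count))
    return raw, quality, coverage

# Example: three reference summaries sharing some content units.
refs = [["A", "B", "C"], ["A", "B"], ["A", "D"]]
pyr = build_pyramid(refs)                    # A:3, B:2, C:1, D:1
print(pyramid_scores(pyr, ["A", "D"], avg_ref_scu_count=2.3))
```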
Anthology ID:
K19-1038
Volume:
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Mohit Bansal, Aline Villavicencio
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
404–418
URL:
https://aclanthology.org/K19-1038
DOI:
10.18653/v1/K19-1038
Cite (ACL):
Yanjun Gao, Chen Sun, and Rebecca J. Passonneau. 2019. Automated Pyramid Summarization Evaluation. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 404–418, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Automated Pyramid Summarization Evaluation (Gao et al., CoNLL 2019)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/K19-1038.pdf