Abstract
We propose a novel metric for evaluating summary content coverage. The evaluation framework follows the Pyramid approach: it measures how many summarization content units, judged important by human annotators, are contained in an automatic summary. Our approach automates the evaluation process, requiring no manual intervention on the evaluated-summary side: it compares the abstract meaning representation of each content-unit mention with that of each summary sentence. We found that the proposed metric complements the widely used ROUGE metrics well.

- Anthology ID: R17-1090
- Volume: Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017
- Month: September
- Year: 2017
- Address: Varna, Bulgaria
- Venue: RANLP
- Publisher: INCOMA Ltd.
- Pages: 701–706
- URL: https://doi.org/10.26615/978-954-452-049-6_090
- DOI: 10.26615/978-954-452-049-6_090
- Cite (ACL): Josef Steinberger, Peter Krejzl, and Tomáš Brychcín. 2017. Pyramid-based Summary Evaluation Using Abstract Meaning Representation. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 701–706, Varna, Bulgaria. INCOMA Ltd.
- Cite (Informal): Pyramid-based Summary Evaluation Using Abstract Meaning Representation (Steinberger et al., RANLP 2017)
- PDF: https://doi.org/10.26615/978-954-452-049-6_090
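To make the abstract's idea concrete, here is a minimal toy sketch of Pyramid-style coverage scoring. All names, the weighting scheme, and the similarity function are hypothetical illustrations, not the authors' method: in particular, the paper compares AMR graphs of content-unit mentions and summary sentences, which is approximated below by simple token overlap.

```python
def similarity(a: str, b: str) -> float:
    """Jaccard token overlap -- a crude stand-in for the paper's
    AMR-graph comparison between a mention and a summary sentence."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def pyramid_score(scus, summary_sentences, threshold=0.5):
    """scus: list of (weight, [mention strings]), where the weight is the
    number of annotators who marked the content unit as important.
    An SCU counts as covered if any of its mentions is sufficiently
    similar to any summary sentence.  (Simplified normalization: divide
    by the total SCU weight rather than by an ideal summary's weight.)"""
    covered = 0
    for weight, mentions in scus:
        if any(similarity(m, s) >= threshold
               for m in mentions for s in summary_sentences):
            covered += weight
    max_weight = sum(w for w, _ in scus)
    return covered / max_weight if max_weight else 0.0


# Toy data: two SCUs (with annotator weights) and a one-sentence summary.
scus = [
    (3, ["the storm hit the coast", "a storm struck the coastline"]),
    (1, ["schools were closed"]),
]
summary = ["A powerful storm hit the coast on Monday."]
print(round(pyramid_score(scus, summary), 2))  # → 0.75
```

In the paper's setting, the hypothetical `similarity` above would be replaced by a comparison of the AMR parses of the mention and the sentence, which is what removes the need for manual annotation of the evaluated summary.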