Abstract
Neural summarization produces outputs that are fluent and readable, but can be poor at content selection, for instance often copying full sentences from the source document. This work explores the use of data-efficient content selectors to over-determine phrases in a source document that should be part of the summary. We use this selector as a bottom-up attention step to constrain the model to likely phrases. We show that this approach improves the ability to compress text, while still generating fluent summaries. This two-step process is both simpler and higher performing than other end-to-end content selection models, leading to significant improvements on ROUGE for both the CNN-DM and NYT corpora. Furthermore, the content selector can be trained with as few as 1,000 sentences, making it easy to transfer a trained summarizer to a new domain.
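To illustrate the bottom-up step described in the abstract, the sketch below masks a copy-attention distribution with a thresholded content-selector output and renormalizes it. This is a minimal PyTorch sketch of the idea, not the authors' implementation (see the repository linked under Code below); the function name, toy values, and threshold are illustrative assumptions.

```python
import torch

def bottom_up_mask(copy_attn, select_probs, threshold=0.25):
    """Constrain a copy-attention distribution to source tokens that the
    content selector marks as likely summary content.

    copy_attn:    (src_len,) copy-attention scores over source tokens
    select_probs: (src_len,) selector probability for each source token
    threshold:    selection cutoff (a hyperparameter; assumed here,
                  tuned on validation data in practice)
    """
    # Zero out attention mass on tokens the selector rules out.
    mask = (select_probs >= threshold).float()
    masked = copy_attn * mask
    # Renormalize so the constrained attention is still a distribution.
    return masked / masked.sum().clamp(min=1e-8)

# Toy usage: six source tokens; the selector keeps tokens 1, 2, and 4.
copy_attn = torch.tensor([0.05, 0.30, 0.20, 0.25, 0.15, 0.05])
select_probs = torch.tensor([0.10, 0.80, 0.60, 0.20, 0.90, 0.05])
print(bottom_up_mask(copy_attn, select_probs))
```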
- Anthology ID: D18-1443
- Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
- Month: October-November
- Year: 2018
- Address: Brussels, Belgium
- Editors: Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
- Venue: EMNLP
- SIG: SIGDAT
- Publisher: Association for Computational Linguistics
- Pages: 4098–4109
- URL: https://preview.aclanthology.org/build-pipeline-with-new-library/D18-1443/
- DOI: 10.18653/v1/D18-1443
- Cite (ACL): Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-Up Abstractive Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal): Bottom-Up Abstractive Summarization (Gehrmann et al., EMNLP 2018)
- PDF: https://preview.aclanthology.org/build-pipeline-with-new-library/D18-1443.pdf
- Code: sebastianGehrmann/bottom-up-summary + additional community code
- Data: CNN/Daily Mail, Multi-News, New York Times Annotated Corpus