Cascaded Attention based Unsupervised Information Distillation for Compressive Summarization

Piji Li, Wai Lam, Lidong Bing, Weiwei Guo, Hang Li


Abstract
When people recall and digest what they have read in order to write a summary, the important content is more likely to attract their attention. Inspired by this observation, we propose a cascaded attention-based unsupervised model that estimates salience information in the text for compressive multi-document summarization. The attention weights are learned automatically within an unsupervised data reconstruction framework and capture sentence salience. By adding sparsity constraints on the number of output vectors, we generate condensed information that can be treated as word salience. Fine-grained and coarse-grained sentence compression strategies are incorporated to produce compressive summaries. Experiments on benchmark datasets show that our framework achieves better results than state-of-the-art methods.
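
The full cascaded architecture is described in the paper itself; as a rough, simplified illustration of the reconstruction-with-sparsity idea mentioned in the abstract, the sketch below learns per-sentence weights by reconstructing a document representation from its sentence embeddings while an L1 penalty keeps the set of highly weighted sentences small. The class and function names, the sigmoid gating, the mean-pooled document target, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch (assumed formulation): unsupervised salience via reconstruction + sparsity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReconstructionSalience(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)    # scores each sentence embedding
        self.decoder = nn.Linear(dim, dim) # maps the condensed vector back to document space

    def forward(self, sent_emb):
        # sent_emb: (num_sentences, dim) pre-computed sentence embeddings
        gates = torch.sigmoid(self.scorer(sent_emb).squeeze(-1))  # per-sentence weight in (0, 1)
        summary_vec = (gates @ sent_emb) / (gates.sum() + 1e-8)   # weighted average of sentences
        doc_recon = self.decoder(summary_vec)                     # reconstructed document vector
        return gates, doc_recon

def train_step(model, sent_emb, optimizer, sparsity_weight=0.01):
    doc_vec = sent_emb.mean(dim=0)                     # simple document embedding as target
    gates, doc_recon = model(sent_emb)
    recon_loss = F.mse_loss(doc_recon, doc_vec)        # reconstruction objective
    sparsity = gates.sum()                             # L1 penalty: prefer few active sentences
    loss = recon_loss + sparsity_weight * sparsity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return gates.detach(), loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    sents = torch.randn(12, 64)                        # 12 sentences, 64-dim embeddings (toy data)
    model = ReconstructionSalience(dim=64)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        salience, loss = train_step(model, sents, opt)
    print("sentence salience:", salience.numpy().round(3))
```

In this sketch the learned weights play the role of sentence salience scores; the paper's model goes further, cascading attention down to the word level so that the condensed representations also yield word salience for compression.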
Anthology ID: D17-1221
Volume: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month: September
Year: 2017
Address: Copenhagen, Denmark
Editors: Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 2081–2090
URL: https://aclanthology.org/D17-1221
DOI: 10.18653/v1/D17-1221
Cite (ACL):
Piji Li, Wai Lam, Lidong Bing, Weiwei Guo, and Hang Li. 2017. Cascaded Attention based Unsupervised Information Distillation for Compressive Summarization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2081–2090, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Cascaded Attention based Unsupervised Information Distillation for Compressive Summarization (Li et al., EMNLP 2017)
PDF: https://preview.aclanthology.org/naacl24-info/D17-1221.pdf