Detecting (Un)Important Content for Single-Document News Summarization

Yinfei Yang, Forrest Bao, Ani Nenkova


Abstract
We present a robust approach for detecting intrinsic sentence importance in news, by training on two corpora of document-summary pairs. When used for single-document summarization, our approach, combined with the “beginning of document” heuristic, outperforms a state-of-the-art summarizer and the beginning-of-article baseline in both automatic and manual evaluations. These results represent an important advance because, in the absence of cross-document repetition, single-document summarizers for news have not been able to consistently outperform the strong beginning-of-article baseline.
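
The abstract describes pairing a learned sentence-importance score with the “beginning of document” heuristic. As a rough illustration only, the sketch below shows one way such a combination could be wired up at sentence-selection time; the `summarize` function, the `importance` scorer, and the `position_weight` blending are hypothetical stand-ins for this example, not the classifier, features, or training procedure described in the paper.

```python
# A minimal sketch (not the authors' exact pipeline): score each sentence with
# some pre-trained importance scorer and blend that score with a simple
# beginning-of-document position prior, then keep the top-k sentences.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ScoredSentence:
    index: int      # position of the sentence in the document
    text: str
    score: float    # blended importance score


def summarize(
    sentences: List[str],
    importance: Callable[[str], float],   # hypothetical scorer returning a value in [0, 1]
    k: int = 3,
    position_weight: float = 0.5,         # assumed blending weight, not from the paper
) -> List[str]:
    """Select k sentences by combining learned importance with a position prior."""
    n = max(len(sentences), 1)
    scored = []
    for i, sent in enumerate(sentences):
        position_prior = 1.0 - i / n       # earlier sentences get a higher prior
        blended = (1 - position_weight) * importance(sent) + position_weight * position_prior
        scored.append(ScoredSentence(i, sent, blended))
    top = sorted(scored, key=lambda s: s.score, reverse=True)[:k]
    # Re-sort the selected sentences by original position so the extract reads in document order.
    return [s.text for s in sorted(top, key=lambda s: s.index)]


if __name__ == "__main__":
    doc = [
        "The city council approved the new transit budget on Monday.",
        "The vote followed months of public hearings.",
        "In unrelated news, the mayor attended a school opening.",
        "Officials said construction could begin next spring.",
    ]
    # Toy stand-in scorer for demonstration; the paper instead trains on document-summary pairs.
    toy_importance = lambda s: 1.0 if ("budget" in s or "construction" in s) else 0.2
    print(summarize(doc, toy_importance, k=2))
```
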
Anthology ID: E17-2112
Volume: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
Month: April
Year: 2017
Address: Valencia, Spain
Editors: Mirella Lapata, Phil Blunsom, Alexander Koller
Venue: EACL
Publisher: Association for Computational Linguistics
Pages: 707–712
URL: https://aclanthology.org/E17-2112
Cite (ACL): Yinfei Yang, Forrest Bao, and Ani Nenkova. 2017. Detecting (Un)Important Content for Single-Document News Summarization. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 707–712, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal): Detecting (Un)Important Content for Single-Document News Summarization (Yang et al., EACL 2017)
PDF: https://preview.aclanthology.org/emnlp-22-attachments/E17-2112.pdf