Hierarchical Transformers for Multi-Document Summarization

Yang Liu, Mirella Lapata


Abstract
In this paper, we develop a neural summarization model which can effectively process multiple input documents and distill the Transformer architecture with the ability to encode documents in a hierarchical manner. We represent cross-document relationships via an attention mechanism which allows us to share information across documents, as opposed to simply concatenating text spans and processing them as a flat sequence. Our model learns latent dependencies among textual units, but can also take advantage of explicit graph representations focusing on similarity or discourse relations. Empirical results on the WikiSum dataset demonstrate that the proposed architecture brings substantial improvements over several strong baselines.
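The hierarchical design sketched in the abstract can be illustrated in a few lines of PyTorch: a token-level Transformer encodes each paragraph independently, the token states are pooled into one vector per paragraph, and a second Transformer attends across all paragraphs drawn from the input documents. This is a minimal illustrative sketch, not the authors' released hiersumm code; the class name HierarchicalEncoder, the mean-pooling step, and all hyperparameters are assumptions made for the example.

import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Illustrative two-level encoder: tokens within a paragraph, then paragraphs across documents."""
    def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Token-level self-attention, applied within each paragraph.
        self.token_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        # Paragraph-level self-attention, shared across all source documents.
        self.para_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)

    def forward(self, token_ids):
        # token_ids: (num_paragraphs, max_tokens) for one multi-document input.
        tok = self.token_encoder(self.embed(token_ids))   # (P, T, d_model)
        para = tok.mean(dim=1)                            # pool tokens into one vector per paragraph
        fused = self.para_encoder(para.unsqueeze(0))      # cross-paragraph attention, batch of 1
        return fused.squeeze(0)                           # (P, d_model) contextualized paragraph states

# Example: 4 paragraphs of 50 tokens each, drawn from several source documents.
enc = HierarchicalEncoder(vocab_size=30000)
out = enc(torch.randint(0, 30000, (4, 50)))
print(out.shape)  # torch.Size([4, 256])

This sketch covers only the hierarchical encoding step; the full model in the linked nlpyang/hiersumm repository also includes the decoder and the graph-informed attention variants mentioned in the abstract.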
Anthology ID:
P19-1500
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5070–5081
URL:
https://aclanthology.org/P19-1500
DOI:
10.18653/v1/P19-1500
Cite (ACL):
Yang Liu and Mirella Lapata. 2019. Hierarchical Transformers for Multi-Document Summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5070–5081, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Hierarchical Transformers for Multi-Document Summarization (Liu & Lapata, ACL 2019)
PDF:
https://preview.aclanthology.org/fix-dup-bibkey/P19-1500.pdf
Video:
https://preview.aclanthology.org/fix-dup-bibkey/P19-1500.mp4
Code
nlpyang/hiersumm
Data
WikiSum