An Exploratory Study on Long Dialogue Summarization: What Works and What’s Next

Yusen Zhang, Ansong Ni, Tao Yu, Rui Zhang, Chenguang Zhu, Budhaditya Deb, Asli Celikyilmaz, Ahmed Hassan Awadallah, Dragomir Radev


Abstract
Dialogue summarization helps readers capture salient information from long conversations in meetings, interviews, and TV series. However, real-world dialogues pose a great challenge to current summarization models, as the dialogue length typically exceeds the input limits imposed by recent transformer-based pre-trained models, and the interactive nature of dialogues makes relevant information more context-dependent and sparsely distributed than in news articles. In this work, we perform a comprehensive study on long dialogue summarization by investigating three strategies to deal with the lengthy input problem and locate relevant information: (1) extended transformer models such as Longformer, (2) retrieve-then-summarize pipeline models with several dialogue utterance retrieval methods, and (3) hierarchical dialogue encoding models such as HMNet. Our experimental results on three long dialogue datasets (QMSum, MediaSum, SummScreen) show that the retrieve-then-summarize pipeline models yield the best performance. We also demonstrate that summary quality can be further improved with a stronger retrieval model and pre-training on suitable external summarization datasets.
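
The retrieve-then-summarize pipeline highlighted in the abstract can be illustrated with a minimal sketch: rank the dialogue utterances by similarity to the query, keep the top-scoring ones in their original dialogue order, and pass the shortened sub-dialogue to a standard pre-trained summarizer whose input limit it now fits. The TF-IDF retriever and BART checkpoint below are illustrative assumptions, not the paper's exact retrievers or fine-tuned models.

# A minimal retrieve-then-summarize sketch, assuming a TF-IDF retriever
# and an off-the-shelf BART summarizer (not the paper's configuration).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

def retrieve_then_summarize(utterances, query, top_k=20):
    # Step 1: score each dialogue utterance by TF-IDF similarity to the query.
    vectorizer = TfidfVectorizer()
    utt_vecs = vectorizer.fit_transform(utterances)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, utt_vecs)[0]
    # Step 2: keep the top-k utterances, restored to dialogue order so the
    # retrieved sub-dialogue stays coherent for the summarizer.
    top_ids = sorted(sorted(range(len(utterances)), key=lambda i: -scores[i])[:top_k])
    sub_dialogue = " ".join(utterances[i] for i in top_ids)
    # Step 3: summarize the shortened input with a pre-trained model.
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    return summarizer(sub_dialogue, max_length=128, min_length=30)[0]["summary_text"]

Restoring dialogue order in Step 2 (rather than keeping retrieval rank order) matters because the summarizer is trained on coherent text; a query-relevance ordering would scramble the conversational flow.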
Anthology ID:
2021.findings-emnlp.377
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4426–4433
URL:
https://aclanthology.org/2021.findings-emnlp.377
DOI:
10.18653/v1/2021.findings-emnlp.377
Cite (ACL):
Yusen Zhang, Ansong Ni, Tao Yu, Rui Zhang, Chenguang Zhu, Budhaditya Deb, Asli Celikyilmaz, Ahmed Hassan Awadallah, and Dragomir Radev. 2021. An Exploratory Study on Long Dialogue Summarization: What Works and What’s Next. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4426–4433, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
An Exploratory Study on Long Dialogue Summarization: What Works and What’s Next (Zhang et al., Findings 2021)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2021.findings-emnlp.377.pdf
Video:
https://preview.aclanthology.org/dois-2013-emnlp/2021.findings-emnlp.377.mp4
Code:
chatc/longdialsumm
Data:
SAMSum, SummScreen