Abstract
Dialogue summarization models aim to generate a concise and accurate summary of a multi-party dialogue. The complexity of dialogue, including coreference, dialogue acts, and inter-speaker interactions, brings unique challenges to dialogue summarization. Most recent neural models achieve state-of-the-art performance by following the pretrain-then-finetune recipe, in which a large-scale language model (LLM) is pretrained on single-speaker written text but later finetuned on multi-speaker dialogue text. To mitigate this gap between pretraining and finetuning, we propose several approaches that convert dialogue into a third-person narrative style and show that the narration serves as a valuable annotation for LLMs. Empirical results on three benchmark datasets show that our simple approach achieves higher scores on both ROUGE and a factual correctness metric.
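As a toy illustration of the core idea only (the paper's actual conversion approaches are not shown here), a minimal rule-based sketch might rewrite speaker-tagged turns as third-person reported speech before passing the text to a summarizer; the `narrate` helper and the sample turns below are hypothetical:

```python
# Hypothetical sketch of dialogue-to-narrative conversion; not the authors'
# method. Rewrites (speaker, utterance) pairs as reported speech so the input
# better matches the single-narrator written text an LLM was pretrained on.

def narrate(turns):
    """Turn (speaker, utterance) pairs into a third-person narrative string."""
    return " ".join(f'{speaker} said: "{utterance}"' for speaker, utterance in turns)

dialogue = [
    ("Amanda", "I baked cookies. Do you want some?"),
    ("Jerry", "Sure!"),
]

# The narrated string would then be fed to the pretrained summarization model.
print(narrate(dialogue))
# -> Amanda said: "I baked cookies. Do you want some?" Jerry said: "Sure!"
```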
- Anthology ID: 2022.findings-emnlp.261
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates
- Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 3565–3575
- URL: https://aclanthology.org/2022.findings-emnlp.261
- DOI: 10.18653/v1/2022.findings-emnlp.261
- Cite (ACL): Ruochen Xu, Chenguang Zhu, and Michael Zeng. 2022. Narrate Dialogues for Better Summarization. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3565–3575, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal): Narrate Dialogues for Better Summarization (Xu et al., Findings 2022)
- PDF: https://aclanthology.org/2022.findings-emnlp.261.pdf