Curriculum Prompt Learning with Self-Training for Abstractive Dialogue Summarization

Changqun Li, Linlin Wang, Xin Lin, Gerard de Melo, Liang He


Abstract
Succinctly summarizing dialogue is a task of growing interest, but inherent challenges, such as insufficient training data and low information density, impede our ability to train abstractive models. In this work, we propose a novel curriculum-based prompt learning method with self-training to address these problems. Specifically, prompts are learned using a curriculum learning strategy that gradually increases the degree of prompt perturbation, thereby improving the dialogue understanding and modeling capabilities of our model. Unlabeled dialogue is incorporated by means of self-training so as to reduce the dependency on labeled data. We further investigate topic-aware prompts to better plan for the generation of summaries. Experiments confirm that our model substantially outperforms strong baselines and achieves new state-of-the-art results on the AMI and ICSI datasets. Human evaluations also show the superiority of our model with regard to summary generation quality.
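
Below is a minimal, hypothetical sketch of the two ideas the abstract describes: curriculum-scheduled perturbation of learnable soft prompts, and self-training in which a frozen teacher copy pseudo-labels unlabeled dialogues. It assumes a toy PyTorch stand-in model; the class names, tensor shapes, noise schedule, and pseudo-labeling step are illustrative assumptions, not the authors' released implementation.

```python
# Toy sketch (assumptions, not the paper's code): soft prompts are perturbed
# with noise whose scale grows over curriculum stages, and a frozen teacher
# pseudo-labels unlabeled dialogues to augment the labeled training set.
import copy
import torch
import torch.nn as nn

class PromptedSummarizer(nn.Module):
    """Toy stand-in: learnable soft prompts prepended to input embeddings."""
    def __init__(self, vocab_size=1000, d_model=64, prompt_len=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, input_ids, prompt_noise_scale=0.0):
        x = self.embed(input_ids)                              # (B, T, d)
        prompt = self.prompt.unsqueeze(0).expand(x.size(0), -1, -1)
        if prompt_noise_scale > 0:                             # curriculum perturbation
            prompt = prompt + prompt_noise_scale * torch.randn_like(prompt)
        h, _ = self.encoder(torch.cat([prompt, x], dim=1))
        return self.head(h[:, -input_ids.size(1):])            # per-token logits

def train_curriculum_self_training(model, labeled, unlabeled, stages=(0.0, 0.05, 0.1)):
    """Easy-to-hard prompt perturbation; pseudo-labels from a frozen teacher."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for noise in stages:                                       # increasing perturbation
        teacher = copy.deepcopy(model).eval()                  # snapshot as teacher
        with torch.no_grad():                                  # pseudo-label unlabeled data
            pseudo = [(x, teacher(x).argmax(-1)) for x in unlabeled]
        for src, tgt in list(labeled) + pseudo:
            logits = model(src, prompt_noise_scale=noise)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)), tgt.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = PromptedSummarizer()
    labeled = [(torch.randint(0, 1000, (2, 12)), torch.randint(0, 1000, (2, 12)))]
    unlabeled = [torch.randint(0, 1000, (2, 12))]
    train_curriculum_self_training(model, labeled, unlabeled)
    print("finished toy curriculum + self-training run")
```

The curriculum here is expressed purely through the noise schedule on the prompt embeddings; the paper's actual perturbation strategy, topic-aware prompts, and summarization backbone are not reproduced in this sketch.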
Anthology ID:
2022.emnlp-main.72
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1096–1106
URL:
https://aclanthology.org/2022.emnlp-main.72
DOI:
10.18653/v1/2022.emnlp-main.72
Cite (ACL):
Changqun Li, Linlin Wang, Xin Lin, Gerard de Melo, and Liang He. 2022. Curriculum Prompt Learning with Self-Training for Abstractive Dialogue Summarization. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1096–1106, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Curriculum Prompt Learning with Self-Training for Abstractive Dialogue Summarization (Li et al., EMNLP 2022)
PDF:
https://preview.aclanthology.org/add_acl24_videos/2022.emnlp-main.72.pdf