STRUDEL: Structured Dialogue Summarization for Dialogue Comprehension
Borui Wang | Chengcheng Feng | Arjun Nair | Madelyn Mao | Jai Desai | Asli Celikyilmaz | Haoran Li | Yashar Mehdad | Dragomir Radev
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Abstractive dialogue summarization has long been viewed as an important standalone task in natural language processing, but no previous work has explored the possibility of whether abstractive dialogue summarization can also be used as a means to boost an NLP system’s performance on other important dialogue comprehension tasks. In this paper, we propose a novel type of dialogue summarization task - STRUctured DiaLoguE Summarization (STRUDEL) - that can help pre-trained language models to better understand dialogues and improve their performance on important dialogue comprehension tasks. In contrast to the holistic approach taken by the traditional free-form abstractive summarization task for dialogues, STRUDEL aims to decompose and imitate the hierarchical, systematic and structured mental process that we human beings usually go through when understanding and analyzing dialogues, and thus has the advantage of being more focused, specific and instructive for dialogue comprehension models to learn from. We further introduce a new STRUDEL dialogue comprehension modeling framework that integrates STRUDEL into a dialogue reasoning module over transformer encoder language models to improve their dialogue comprehension ability. In our empirical experiments on two important downstream dialogue comprehension tasks - dialogue question answering and dialogue response prediction - we demonstrate that our STRUDEL dialogue comprehension models can significantly improve the dialogue comprehension performance of transformer encoder language models.