Improving Multiple Documents Grounded Goal-Oriented Dialog Systems via Diverse Knowledge Enhanced Pretrained Language Model

Yunah Jang, Dongryeol Lee, Hyung Joo Park, Taegwan Kang, Hwanhee Lee, Hyunkyung Bae, Kyomin Jung


Abstract
In this paper, we discuss our submission to the MultiDoc2Dial task, which aims to model goal-oriented dialogues grounded in multiple documents. The task is split into grounding span prediction and agent response generation. The baseline for the task is a retrieval-augmented generation model, which consists of a dense passage retrieval (DPR) model for the retrieval part and a BART model for the generation part. The main challenge of this task is that the system requires a great amount of pretrained knowledge to generate answers grounded in multiple documents. To overcome this challenge, we adopt model pretraining, fine-tuning, and multi-task learning to enhance our model's coverage of pretrained knowledge. We experiment with various settings of our method to show the effectiveness of our approaches.
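The abstract describes a retrieval-augmented generation baseline that pairs a DPR retriever with a BART generator. The sketch below illustrates that general architecture with Hugging Face Transformers' RAG classes; the checkpoint name, index settings, and example query are illustrative assumptions, not the authors' exact configuration or training setup.

```python
# Minimal sketch of a RAG pipeline (DPR retriever + BART generator), assuming the
# public "facebook/rag-sequence-nq" checkpoint; the paper fine-tunes on MultiDoc2Dial
# with its own document index, which is not reproduced here.
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

model_name = "facebook/rag-sequence-nq"  # assumed checkpoint for illustration
tokenizer = RagTokenizer.from_pretrained(model_name)
retriever = RagRetriever.from_pretrained(
    model_name, index_name="exact", use_dummy_dataset=True  # toy index for the demo
)
model = RagSequenceForGeneration.from_pretrained(model_name, retriever=retriever)

# The dialogue history is flattened into a single query; the retriever selects
# grounding passages and BART generates the agent response conditioned on them.
query = "user: How do I renew my driver's license? agent:"
inputs = tokenizer(query, return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"], num_beams=4, max_length=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

In the task setting, the same retriever output also serves grounding span prediction, which is where the paper's multi-task learning and additional pretraining are applied.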
Anthology ID:
2022.dialdoc-1.15
Volume:
Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Song Feng, Hui Wan, Caixia Yuan, Han Yu
Venue:
dialdoc
Publisher:
Association for Computational Linguistics
Pages:
136–141
URL:
https://aclanthology.org/2022.dialdoc-1.15
DOI:
10.18653/v1/2022.dialdoc-1.15
Cite (ACL):
Yunah Jang, Dongryeol Lee, Hyung Joo Park, Taegwan Kang, Hwanhee Lee, Hyunkyung Bae, and Kyomin Jung. 2022. Improving Multiple Documents Grounded Goal-Oriented Dialog Systems via Diverse Knowledge Enhanced Pretrained Language Model. In Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering, pages 136–141, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Improving Multiple Documents Grounded Goal-Oriented Dialog Systems via Diverse Knowledge Enhanced Pretrained Language Model (Jang et al., dialdoc 2022)
PDF:
https://preview.aclanthology.org/emnlp22-frontmatter/2022.dialdoc-1.15.pdf
Data:
CoQA, MultiDoc2Dial