A Pre-training Strategy for Zero-Resource Response Selection in Knowledge-Grounded Conversations

Chongyang Tao, Changyu Chen, Jiazhan Feng, Ji-Rong Wen, Rui Yan


Abstract
Recently, many studies have emerged on building retrieval-based dialogue systems that can effectively leverage background knowledge (e.g., documents) when conversing with humans. However, it is non-trivial to collect large-scale dialogues that are naturally grounded in background documents, which hinders the effective and adequate training of knowledge selection and response matching. To overcome this challenge, we decompose the training of knowledge-grounded response selection into three tasks: 1) query-passage matching; 2) query-dialogue-history matching; and 3) multi-turn response matching, and we jointly learn all three tasks with a unified pre-trained language model. The first two tasks help the model with knowledge selection and comprehension, while the last task matches the proper response to the given query and background knowledge (dialogue history). In this way, the model learns to select relevant knowledge and distinguish the proper response with the help of ad-hoc retrieval corpora and a large number of ungrounded multi-turn dialogues. Experimental results on two benchmarks for knowledge-grounded response selection indicate that our model achieves performance comparable to several existing methods that rely on crowd-sourced data for training.
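The abstract describes joint multi-task training of a single pre-trained language model on three matching tasks. As a rough illustration of such a setup, here is a minimal sketch: one shared BERT-style encoder with a binary matching head per task, trained by sampling a batch from one task at each step. The class name, head names, and training-loop layout are illustrative assumptions, not the authors' released implementation.

```python
# Minimal multi-task matching sketch (illustrative, not the paper's code):
# a shared pre-trained encoder with three binary matching heads, jointly
# trained on query-passage, query-history, and response matching pairs.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class MultiTaskMatcher(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)  # shared PLM
        hidden = self.encoder.config.hidden_size
        # one binary matching head per training task (names are assumptions)
        self.heads = nn.ModuleDict({
            "query_passage": nn.Linear(hidden, 2),
            "query_history": nn.Linear(hidden, 2),
            "response_matching": nn.Linear(hidden, 2),
        })

    def forward(self, task, input_ids, attention_mask):
        # each example is a "[CLS] A [SEP] B [SEP]" pair for the given task
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.heads[task](out.pooler_output)

model = MultiTaskMatcher()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = nn.CrossEntropyLoss()

def train_step(task, input_ids, attention_mask, labels):
    # joint learning: all tasks update the same shared encoder
    logits = model(task, input_ids, attention_mask)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# usage: a toy query-passage matching batch with a positive label
tok = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["where was the film shot?"], ["the film was shot in rome."],
            return_tensors="pt", padding=True)
loss = train_step("query_passage", batch["input_ids"],
                  batch["attention_mask"], torch.tensor([1]))
```

In this reading, the ad-hoc retrieval corpora supply query-passage pairs and the ungrounded multi-turn dialogues supply the other two tasks, so no document-grounded dialogue data is needed for training.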
Anthology ID:
2021.acl-long.343
Volume:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month:
August
Year:
2021
Address:
Online
Venues:
ACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
4446–4457
URL:
https://aclanthology.org/2021.acl-long.343
DOI:
10.18653/v1/2021.acl-long.343
Cite (ACL):
Chongyang Tao, Changyu Chen, Jiazhan Feng, Ji-Rong Wen, and Rui Yan. 2021. A Pre-training Strategy for Zero-Resource Response Selection in Knowledge-Grounded Conversations. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4446–4457, Online. Association for Computational Linguistics.
Cite (Informal):
A Pre-training Strategy for Zero-Resource Response Selection in Knowledge-Grounded Conversations (Tao et al., ACL-IJCNLP 2021)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2021.acl-long.343.pdf
Optional supplementary material:
 2021.acl-long.343.OptionalSupplementaryMaterial.pdf
Video:
 https://preview.aclanthology.org/ingestion-script-update/2021.acl-long.343.mp4
Data
CMU DoG | Wizard of Wikipedia