Domain-Lifelong Learning for Dialogue State Tracking via Knowledge Preservation Networks
Qingbin Liu | Pengfei Cao | Cao Liu | Jiansong Chen | Xunliang Cai | Fan Yang | Shizhu He | Kang Liu | Jun Zhao
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Dialogue state tracking (DST), which estimates user goals given a dialogue context, is an essential component of task-oriented dialogue systems. Conventional DST models are usually trained offline, which requires a fixed dataset prepared in advance. This paradigm is often impractical in real-world applications, since online dialogue systems continually encounter new data and new domains. Therefore, this paper explores Domain-Lifelong Learning for Dialogue State Tracking (DLL-DST), which aims to continually train a DST model on new data so that it learns continually emerging new domains while avoiding catastrophic forgetting of previously learned domains. To this end, we propose a novel domain-lifelong learning method, called Knowledge Preservation Networks (KPN), which consists of multi-prototype enhanced retrospection and multi-strategy knowledge distillation, to solve the problems of expression diversity and combinatorial explosion in the DLL-DST task. Experimental results show that KPN effectively alleviates catastrophic forgetting and outperforms previous state-of-the-art lifelong learning methods by 4.25% and 8.27% in whole joint goal accuracy on the MultiWOZ benchmark and the SGD benchmark, respectively.
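The abstract names knowledge distillation as one of KPN's strategies for preserving old-domain knowledge. As a rough illustration only, not the paper's KPN implementation, the sketch below shows the generic distillation setup for lifelong learning: a frozen snapshot of the model trained on old domains acts as a teacher, and its softened predictions regularize the model being updated on a new domain. All names here (`new_model`, `old_model`, `task_loss_fn`, `alpha`) are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitude stays consistent across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

def lifelong_step(new_model, old_model, batch, task_loss_fn, alpha=0.5):
    """One training step on a new domain, regularized toward the old model.

    old_model is a frozen copy of the network before training on the new
    domain; the distillation term discourages its predictions on old-domain
    behavior from drifting (i.e., catastrophic forgetting).
    """
    new_logits = new_model(batch["inputs"])
    with torch.no_grad():
        old_logits = old_model(batch["inputs"])  # teacher: no gradient updates
    task_loss = task_loss_fn(new_logits, batch["labels"])  # learn the new domain
    kd_loss = distillation_loss(new_logits, old_logits)    # preserve old domains
    return task_loss + alpha * kd_loss
```

KPN itself combines several such distillation strategies with multi-prototype enhanced retrospection; this sketch only conveys the basic teacher-student mechanism the method builds on.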