Abstract
Classic pipeline models for task-oriented dialogue systems require explicitly modeling dialogue states and hand-crafted action spaces to query a domain-specific knowledge base. Conversely, sequence-to-sequence models learn to map the dialogue history to the current-turn response without explicit knowledge base querying. In this work, we propose a novel framework that combines the advantages of classic pipeline and sequence-to-sequence models. Our framework models a dialogue state as a fixed-size distributed representation and uses this representation to query a knowledge base via an attention mechanism. Experiments on the Stanford Multi-turn Multi-domain Task-oriented Dialogue Dataset show that our framework significantly outperforms other sequence-to-sequence baseline models on both automatic and human evaluation.
- Anthology ID: C18-1320
- Volume: Proceedings of the 27th International Conference on Computational Linguistics
- Month: August
- Year: 2018
- Address: Santa Fe, New Mexico, USA
- Venue: COLING
- Publisher: Association for Computational Linguistics
- Pages: 3781–3792
- URL: https://aclanthology.org/C18-1320
- Cite (ACL): Haoyang Wen, Yijia Liu, Wanxiang Che, Libo Qin, and Ting Liu. 2018. Sequence-to-Sequence Learning for Task-oriented Dialogue with Dialogue State Representation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3781–3792, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
- Cite (Informal): Sequence-to-Sequence Learning for Task-oriented Dialogue with Dialogue State Representation (Wen et al., COLING 2018)
- PDF: https://preview.aclanthology.org/starsem-semeval-split/C18-1320.pdf
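As a rough illustration of the mechanism the abstract describes, the sketch below attends over embedded knowledge-base entries with a fixed-size dialogue-state vector: dot-product attention scores, a softmax, and a weighted sum of entries. This is a minimal PyTorch sketch under assumed names and shapes (`query_kb`, `hidden_dim`, `num_entries` are illustrative), not the paper's actual implementation.

```python
# Minimal sketch of attention-based KB querying with a fixed-size
# dialogue-state vector. Names and dimensions are illustrative assumptions;
# they do not reproduce the authors' model.
import torch
import torch.nn.functional as F


def query_kb(state: torch.Tensor, kb_entries: torch.Tensor) -> torch.Tensor:
    """Attend over KB entries with the dialogue-state representation.

    state:      (hidden_dim,)             fixed-size dialogue state
    kb_entries: (num_entries, hidden_dim) embedded KB rows
    returns:    (hidden_dim,)             attention-weighted KB summary
    """
    # Dot-product attention score between the state and each KB entry.
    scores = kb_entries @ state         # (num_entries,)
    weights = F.softmax(scores, dim=0)  # attention distribution over entries
    # Weighted sum of entries: a soft, differentiable "query result"
    # that a decoder could condition on when generating the response.
    return weights @ kb_entries         # (hidden_dim,)


# Toy usage: 4 KB entries, hidden size 8.
state = torch.randn(8)
kb = torch.randn(4, 8)
kb_summary = query_kb(state, kb)
print(kb_summary.shape)  # torch.Size([8])
```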