Discriminative Deep Dyna-Q: Robust Planning for Dialogue Policy Learning
Shang-Yu Su, Xiujun Li, Jianfeng Gao, Jingjing Liu, Yun-Nung Chen
Abstract
This paper presents a Discriminative Deep Dyna-Q (D3Q) approach to improving the effectiveness and robustness of Deep Dyna-Q (DDQ), a recently proposed framework that extends the Dyna-Q algorithm to integrate planning for task-completion dialogue policy learning. To obviate DDQ’s high dependency on the quality of simulated experiences, we incorporate an RNN-based discriminator in D3Q to differentiate simulated experience from real user experience in order to control the quality of training data. Experiments show that D3Q significantly outperforms DDQ by controlling the quality of simulated experience used for planning. The effectiveness and robustness of D3Q are further demonstrated in a domain extension setting, where the agent’s capability of adapting to a changing environment is tested.
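The mechanism the abstract describes can be pictured with a short sketch: a recurrent discriminator scores each simulated experience produced by the world model, and only experiences judged sufficiently realistic are used for planning. This is a minimal illustration written from the abstract, not the released MiuLab/D3Q code; the class names, the `encode` helper, and the threshold value are assumptions.

```python
# Minimal sketch (assumed names, not the authors' released code): an LSTM
# discriminator scores an encoded dialogue experience as "real user" vs.
# "world-model simulated"; only high-scoring simulated experiences are kept
# for the planning phase.
import torch
import torch.nn as nn


class ExperienceDiscriminator(nn.Module):
    """Scores an encoded dialogue experience; output ~ P(experience is real)."""

    def __init__(self, feat_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, turns, feat_dim), one encoded experience per row
        _, (h_n, _) = self.rnn(seq)
        return torch.sigmoid(self.out(h_n[-1])).squeeze(-1)


def filter_simulated(discriminator, simulated_batch, encode, threshold=0.5):
    """Keep only simulated experiences the discriminator judges realistic.

    `encode` is an assumed helper mapping an experience tuple to a
    (turns, feat_dim) tensor; `threshold` is a tunable quality cutoff.
    """
    kept = []
    with torch.no_grad():
        for exp in simulated_batch:
            score = discriminator(encode(exp).unsqueeze(0)).item()
            if score >= threshold:  # looks sufficiently like real user experience
                kept.append(exp)
    return kept
```

In a D3Q-style planning loop, the policy would then be updated on real experiences plus only the filtered simulated ones, while the discriminator itself is trained to separate real from world-model experiences; the exact features and training schedule are those in the paper and the MiuLab/D3Q repository.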
- Anthology ID: D18-1416
- Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
- Month: October-November
- Year: 2018
- Address: Brussels, Belgium
- Editors: Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
- Venue: EMNLP
- SIG: SIGDAT
- Publisher: Association for Computational Linguistics
- Pages: 3813–3823
- URL: https://aclanthology.org/D18-1416
- DOI: 10.18653/v1/D18-1416
- Cite (ACL): Shang-Yu Su, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Yun-Nung Chen. 2018. Discriminative Deep Dyna-Q: Robust Planning for Dialogue Policy Learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3813–3823, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal): Discriminative Deep Dyna-Q: Robust Planning for Dialogue Policy Learning (Su et al., EMNLP 2018)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/D18-1416.pdf
- Code: MiuLab/D3Q