Agent-Aware Dropout DQN for Safe and Efficient On-line Dialogue Policy Learning

Lu Chen, Xiang Zhou, Cheng Chang, Runzhe Yang, Kai Yu


Abstract
Hand-crafted rules and reinforcement learning (RL) are two popular choices for obtaining a dialogue policy. A rule-based policy is often reliable within its predefined scope but cannot adapt on its own, whereas an RL policy improves with data but often suffers from poor initial performance. We employ a companion learning framework to integrate the two approaches for on-line dialogue policy learning, in which a predefined rule-based policy acts as a “teacher” and guides a data-driven RL “student” by giving example actions as well as additional rewards. A novel agent-aware dropout Deep Q-Network (AAD-DQN) is proposed to address the problems of when to consult the teacher and how to learn from the teacher’s experiences. AAD-DQN, as the data-driven student policy, provides (1) two separate experience memories for student and teacher and (2) a dropout-based uncertainty estimate to control the timing of consultation and learning. Simulation experiments showed that the proposed approach can significantly improve both the safety and efficiency of on-line policy optimization compared to other companion learning approaches as well as supervised pre-training on a static dialogue corpus.
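The following is a minimal sketch (not the authors' released code) of the two mechanisms the abstract describes: separate experience memories for student- and teacher-generated transitions, and Monte-Carlo-dropout uncertainty over Q-values used to decide when the student should consult the rule-based teacher. All network sizes, the dropout rate, the consultation threshold, and the `teacher_policy` callable are illustrative assumptions, not values from the paper.

```python
import random
from collections import deque

import torch
import torch.nn as nn


class DropoutQNetwork(nn.Module):
    """Q-network with dropout layers kept stochastic at decision time (MC dropout)."""

    def __init__(self, state_dim, num_actions, hidden=128, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state):
        return self.net(state)


def q_uncertainty(model, state, n_samples=20):
    """Estimate per-action Q-value mean and std via repeated stochastic forward passes."""
    model.train()  # keep dropout active so each pass gives a different sample
    with torch.no_grad():
        samples = torch.stack([model(state) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)


# Two separate experience memories, as described in the abstract:
student_memory = deque(maxlen=10000)  # transitions where the student acted
teacher_memory = deque(maxlen=10000)  # transitions where the teacher acted


def choose_action(model, state, teacher_policy, threshold=0.1):
    """Consult the teacher when the student's Q-value estimate is too uncertain."""
    q_mean, q_std = q_uncertainty(model, state)
    if q_std.max().item() > threshold:            # student is unsure: follow the teacher
        return teacher_policy(state), "teacher"
    return int(q_mean.argmax().item()), "student"  # student is confident: act greedily
```

In this sketch, each transition would be appended to `student_memory` or `teacher_memory` depending on which agent produced the action, and Q-learning mini-batches would be drawn from both memories; the paper's exact consultation criterion and additional teacher rewards may differ from the simple threshold shown here.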
Anthology ID:
D17-1260
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2454–2464
URL:
https://aclanthology.org/D17-1260
DOI:
10.18653/v1/D17-1260
Cite (ACL):
Lu Chen, Xiang Zhou, Cheng Chang, Runzhe Yang, and Kai Yu. 2017. Agent-Aware Dropout DQN for Safe and Efficient On-line Dialogue Policy Learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2454–2464, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Agent-Aware Dropout DQN for Safe and Efficient On-line Dialogue Policy Learning (Chen et al., EMNLP 2017)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/D17-1260.pdf
Attachment:
D17-1260.Attachment.zip
Video:
https://preview.aclanthology.org/dois-2013-emnlp/D17-1260.mp4