Abstract
In this work, we propose an adversarial learning method for reward estimation in reinforcement learning (RL) based task-oriented dialog models. Most current RL-based task-oriented dialog systems require access to a reward signal from either user feedback or user ratings. Such user ratings, however, may not always be consistent or available in practice. Furthermore, online dialog policy learning with RL typically requires a large number of queries to users and thus suffers from a sample efficiency problem. To address these challenges, we propose an adversarial learning method that learns dialog rewards directly from dialog samples. These rewards are then used to optimize the dialog policy with policy gradient based RL. In an evaluation in a restaurant search domain, we show that the proposed adversarial dialog learning method achieves a higher dialog success rate than strong baseline methods. We further discuss the covariate shift problem in online adversarial dialog learning and show how it can be addressed with partial access to user feedback.
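The core idea can be sketched as follows: a discriminator is trained to distinguish human dialogs from machine-generated ones, and its score serves as the dialog reward for policy gradient updates, replacing explicit user feedback. Below is a minimal illustrative sketch in PyTorch; the dimensions, the fixed-size dialog encodings, and the one-step simplification of a dialog episode are all assumptions for illustration, not the paper's actual architecture or training procedure.

```python
# Minimal sketch of adversarial reward estimation for dialog policy learning.
# All names, sizes, and the one-step "dialog" simplification are illustrative
# assumptions, not the paper's actual model.
import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS, HIDDEN = 64, 16, 128  # hypothetical sizes

class DialogPolicy(nn.Module):
    """Maps a dialog state encoding to a distribution over system actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, HIDDEN), nn.Tanh(),
            nn.Linear(HIDDEN, NUM_ACTIONS),
        )

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))

class RewardDiscriminator(nn.Module):
    """Scores a dialog encoding: near 1 if it looks like a real human
    dialog, near 0 if it looks machine-generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, HIDDEN), nn.Tanh(),
            nn.Linear(HIDDEN, 1),
        )

    def forward(self, dialog_encoding):
        return torch.sigmoid(self.net(dialog_encoding)).squeeze(-1)

policy, disc = DialogPolicy(), RewardDiscriminator()
policy_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
disc_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

def train_step(real_dialog_enc, state_batch):
    """One adversarial update: discriminator step, then policy step."""
    dist = policy(state_batch)
    actions = dist.sample()
    # Stand-in for encoding the policy's generated dialog; a real system
    # would encode the full simulated dialog episode here.
    fake_dialog_enc = state_batch

    # 1) Discriminator: real human dialogs -> 1, generated dialogs -> 0.
    d_loss = bce(disc(real_dialog_enc), torch.ones(real_dialog_enc.size(0))) \
           + bce(disc(fake_dialog_enc), torch.zeros(fake_dialog_enc.size(0)))
    disc_opt.zero_grad(); d_loss.backward(); disc_opt.step()

    # 2) Policy (REINFORCE): the discriminator's score replaces an
    #    explicit user-provided reward signal.
    reward = disc(fake_dialog_enc).detach()
    pg_loss = -(dist.log_prob(actions) * reward).mean()
    policy_opt.zero_grad(); pg_loss.backward(); policy_opt.step()

# Example call with random stand-in data:
train_step(torch.randn(8, STATE_DIM), torch.randn(8, STATE_DIM))
```

The `.detach()` on the reward keeps the policy update a plain REINFORCE step rather than backpropagating through the discriminator, mirroring the separation between reward estimation and policy optimization described in the abstract.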
- Anthology ID: W18-5041
- Volume: Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue
- Month: July
- Year: 2018
- Address: Melbourne, Australia
- Editors: Kazunori Komatani, Diane Litman, Kai Yu, Alex Papangelis, Lawrence Cavedon, Mikio Nakano
- Venue: SIGDIAL
- SIG: SIGDIAL
- Publisher: Association for Computational Linguistics
- Pages: 350–359
- URL: https://aclanthology.org/W18-5041
- DOI: 10.18653/v1/W18-5041
- Cite (ACL): Bing Liu and Ian Lane. 2018. Adversarial Learning of Task-Oriented Neural Dialog Models. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 350–359, Melbourne, Australia. Association for Computational Linguistics.
- Cite (Informal): Adversarial Learning of Task-Oriented Neural Dialog Models (Liu & Lane, SIGDIAL 2018)
- PDF: https://preview.aclanthology.org/nschneid-patch-1/W18-5041.pdf