Reward-Balancing for Statistical Spoken Dialogue Systems using Multi-objective Reinforcement Learning

Stefan Ultes, Paweł Budzianowski, Iñigo Casanueva, Nikola Mrkšić, Lina M. Rojas-Barahona, Pei-Hao Su, Tsung-Hsien Wen, Milica Gašić, Steve Young


Abstract
Reinforcement learning is widely used for dialogue policy optimization where the reward function often consists of more than one component, e.g., the dialogue success and the dialogue length. In this work, we propose a structured method for finding a good balance between these components by searching for the optimal reward component weighting. To render this search feasible, we use multi-objective reinforcement learning to significantly reduce the number of training dialogues required. We apply our proposed method to find optimized component weights for six domains and compare them to a default baseline.
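The abstract describes combining reward components such as dialogue success and dialogue length under a component weighting. A minimal sketch of the common linear-scalarization approach is below; the component names, signs, and default weights are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: linear scalarization of two dialogue-reward
# components (task success and dialogue length). The weights below are
# illustrative defaults, not values from the paper.

def scalarized_reward(success: bool, n_turns: int,
                      w_success: float = 20.0,
                      w_length: float = -1.0) -> float:
    """Combine success and length components with fixed weights.

    A successful dialogue earns w_success; each turn adds w_length
    (negative, so longer dialogues are penalized).
    """
    return w_success * float(success) + w_length * n_turns

# A successful 5-turn dialogue vs. a failed 3-turn one:
r_good = scalarized_reward(True, 5)   # 20.0 - 5.0 = 15.0
r_bad = scalarized_reward(False, 3)   # 0.0 - 3.0 = -3.0
```

Searching over `(w_success, w_length)` pairs by retraining a policy per weighting is what makes the balancing problem expensive; the paper's use of multi-objective reinforcement learning is aimed at reducing that per-weighting training cost.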
Anthology ID:
W17-5509
Volume:
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue
Month:
August
Year:
2017
Address:
Saarbrücken, Germany
Venue:
SIGDIAL
SIG:
SIGDIAL
Publisher:
Association for Computational Linguistics
Pages:
65–70
URL:
https://aclanthology.org/W17-5509
DOI:
10.18653/v1/W17-5509
Cite (ACL):
Stefan Ultes, Paweł Budzianowski, Iñigo Casanueva, Nikola Mrkšić, Lina M. Rojas-Barahona, Pei-Hao Su, Tsung-Hsien Wen, Milica Gašić, and Steve Young. 2017. Reward-Balancing for Statistical Spoken Dialogue Systems using Multi-objective Reinforcement Learning. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 65–70, Saarbrücken, Germany. Association for Computational Linguistics.
Cite (Informal):
Reward-Balancing for Statistical Spoken Dialogue Systems using Multi-objective Reinforcement Learning (Ultes et al., SIGDIAL 2017)
PDF:
https://preview.aclanthology.org/ingestion-script-update/W17-5509.pdf