Multi-Task Learning of System Dialogue Act Selection for Supervised Pretraining of Goal-Oriented Dialogue Policies

Sarah McLeod, Ivana Kruijff-Korbayova, Bernd Kiefer


Abstract
This paper describes the use of Multi-Task Neural Networks (NNs) for system dialogue act selection. These models leverage the representations learned by the Natural Language Understanding (NLU) unit to enable robust initialization/bootstrapping of dialogue policies from medium-sized initial data sets. We evaluate the models on two goal-oriented dialogue corpora in the travel booking domain. Results show that the proposed models improve over models trained without knowledge of the NLU tasks.
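The core idea from the abstract can be sketched as a shared encoder feeding two classification heads, one for an NLU task (e.g., user dialogue act recognition) and one for system dialogue act selection, trained with a weighted joint loss. This is a minimal NumPy illustration of that multi-task setup; all dimensions, the auxiliary-task choice, and the loss weighting are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- illustrative only, not from the paper.
INPUT_DIM, HIDDEN_DIM = 16, 8
N_NLU_ACTS, N_SYS_ACTS = 5, 7  # user-act labels (NLU task) / system-act labels

# One shared encoder, plus a separate output head per task.
W_shared = rng.normal(scale=0.1, size=(INPUT_DIM, HIDDEN_DIM))
W_nlu = rng.normal(scale=0.1, size=(HIDDEN_DIM, N_NLU_ACTS))
W_sys = rng.normal(scale=0.1, size=(HIDDEN_DIM, N_SYS_ACTS))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    """The shared representation h feeds both task heads."""
    h = np.tanh(x @ W_shared)
    return softmax(h @ W_nlu), softmax(h @ W_sys)

def multitask_loss(x, y_nlu, y_sys, alpha=0.5):
    """Weighted sum of the two cross-entropy losses. Because both losses
    backpropagate through W_shared, the NLU task shapes the representation
    used by the dialogue-act-selection (policy) head."""
    p_nlu, p_sys = forward(x)
    idx = np.arange(len(x))
    ce_nlu = -np.log(p_nlu[idx, y_nlu]).mean()
    ce_sys = -np.log(p_sys[idx, y_sys]).mean()
    return alpha * ce_nlu + (1 - alpha) * ce_sys
```

Pretraining in this style amounts to fitting the joint loss on supervised dialogue data, then keeping the shared encoder (and optionally the policy head) to initialize the dialogue policy.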
Anthology ID:
W19-5947
Volume:
Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue
Month:
September
Year:
2019
Address:
Stockholm, Sweden
Editors:
Satoshi Nakamura, Milica Gasic, Ingrid Zukerman, Gabriel Skantze, Mikio Nakano, Alexandros Papangelis, Stefan Ultes, Koichiro Yoshino
Venue:
SIGDIAL
SIG:
SIGDIAL
Publisher:
Association for Computational Linguistics
Pages:
411–417
URL:
https://aclanthology.org/W19-5947
DOI:
10.18653/v1/W19-5947
Cite (ACL):
Sarah McLeod, Ivana Kruijff-Korbayova, and Bernd Kiefer. 2019. Multi-Task Learning of System Dialogue Act Selection for Supervised Pretraining of Goal-Oriented Dialogue Policies. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 411–417, Stockholm, Sweden. Association for Computational Linguistics.
Cite (Informal):
Multi-Task Learning of System Dialogue Act Selection for Supervised Pretraining of Goal-Oriented Dialogue Policies (McLeod et al., SIGDIAL 2019)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/W19-5947.pdf