Abstract
This paper presents a novel approach for multi-task learning of language understanding (LU) and dialogue state tracking (DST) in task-oriented dialogue systems. Multi-task training enables the sharing of the neural network layers responsible for encoding the user utterance for both LU and DST, and improves performance while reducing the number of network parameters. In our proposed framework, DST operates on a set of candidate values for each slot that has been mentioned so far. These candidate sets are generated using LU slot annotations for the current user utterance, dialogue acts corresponding to the preceding system utterance, and the dialogue state estimated for the previous turn, enabling DST to handle slots with a large or unbounded set of possible values and to deal with slot values not seen during training. Furthermore, to bridge the gap between training and inference, we investigate the use of scheduled sampling on the LU output for the current user utterance as well as on the DST output for the preceding turn.
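As a rough, non-authoritative illustration of the candidate-set mechanism described in the abstract, the Python sketch below assembles per-slot candidate sets from the three sources the paper names: the dialogue state estimated for the previous turn, the dialogue acts of the preceding system utterance, and LU slot annotations for the current user utterance. All identifiers (`update_candidate_sets`, the act-tuple layout) and the capacity cap are assumptions made for illustration, not the authors' implementation.

```python
from collections import defaultdict

MAX_CANDIDATES = 7  # assumed per-slot cap; the paper keeps candidate sets bounded

def update_candidate_sets(prev_state, system_acts, lu_slot_annotations,
                          max_candidates=MAX_CANDIDATES):
    """Assemble per-slot candidate value sets for the current turn.

    prev_state:          dict slot -> value estimated at the previous turn
    system_acts:         list of (act, slot, value) from the preceding system turn
    lu_slot_annotations: list of (slot, value) tagged by LU in the current utterance
    """
    candidates = defaultdict(list)

    def add(slot, value):
        if value is not None and value not in candidates[slot]:
            candidates[slot].append(value)

    # 1. Carry over values from the previous dialogue state estimate.
    for slot, value in prev_state.items():
        add(slot, value)

    # 2. Values offered or confirmed by the preceding system utterance.
    for _act, slot, value in system_acts:
        add(slot, value)

    # 3. Values tagged by LU in the current user utterance; because these come
    #    from slot tagging rather than a fixed vocabulary, unseen values can
    #    still enter the candidate set.
    for slot, value in lu_slot_annotations:
        add(slot, value)

    # Keep only the most recently added values if a slot exceeds the cap.
    return {slot: vals[-max_candidates:] for slot, vals in candidates.items()}


# Usage: a restaurant-domain turn where the user switches cuisine.
prev = {"cuisine": "italian"}
sys_acts = [("offer", "restaurant_name", "Luigi's")]
lu = [("cuisine", "thai")]
print(update_candidate_sets(prev, sys_acts, lu))
# {'cuisine': ['italian', 'thai'], 'restaurant_name': ["Luigi's"]}
```

The scheduled-sampling idea can be sketched just as briefly: during training, the DST input is drawn either from the ground truth (gold LU labels, gold previous state) or from the model's own predictions, with the teacher-forcing probability decayed over training so that training conditions gradually approach inference. The linear decay below is an assumption; the paper investigates several sampling configurations for the LU and DST inputs.

```python
import random

def scheduled_sample(ground_truth, model_prediction, p_ground_truth):
    """With probability p_ground_truth feed the gold value, otherwise feed the
    model's own prediction, exposing the model to its inference-time inputs."""
    return ground_truth if random.random() < p_ground_truth else model_prediction

def teacher_forcing_prob(step, total_steps, p_min=0.2):
    """Assumed linear decay of the teacher-forcing probability, floored at p_min."""
    return max(p_min, 1.0 - step / total_steps)
```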
- Anthology ID: W18-5045
- Volume: Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue
- Month: July
- Year: 2018
- Address: Melbourne, Australia
- Editors: Kazunori Komatani, Diane Litman, Kai Yu, Alex Papangelis, Lawrence Cavedon, Mikio Nakano
- Venue: SIGDIAL
- SIG: SIGDIAL
- Publisher: Association for Computational Linguistics
- Pages: 376–384
- URL: https://aclanthology.org/W18-5045
- DOI: 10.18653/v1/W18-5045
- Cite (ACL): Abhinav Rastogi, Raghav Gupta, and Dilek Hakkani-Tur. 2018. Multi-task Learning for Joint Language Understanding and Dialogue State Tracking. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 376–384, Melbourne, Australia. Association for Computational Linguistics.
- Cite (Informal): Multi-task Learning for Joint Language Understanding and Dialogue State Tracking (Rastogi et al., SIGDIAL 2018)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/W18-5045.pdf