Modeling Long Context for Task-Oriented Dialogue State Generation

Jun Quan, Deyi Xiong


Abstract
Based on the recently proposed transferable dialogue state generator (TRADE), which predicts dialogue states from an utterance-concatenated dialogue context, we propose a multi-task learning model with a simple yet effective utterance tagging technique and a bidirectional language model as an auxiliary task for task-oriented dialogue state generation. These approaches enable the model to learn better representations of long dialogue context, addressing the problem that the baseline's performance drops significantly when the input dialogue context sequence is long. In our experiments, the proposed model achieves a 7.03% relative improvement over the baseline, establishing a new state-of-the-art joint goal accuracy of 52.04% on the MultiWOZ 2.0 dataset.
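
The abstract names two additions to TRADE: tagging each utterance in the concatenated dialogue context with its speaker, and training a bidirectional language model over that context as an auxiliary task. The following is a minimal sketch of both ideas in PyTorch; the module names, tag tokens ([usr]/[sys]), and hyperparameters are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn as nn

def tag_utterances(turns):
    # Prefix each utterance with a speaker tag before concatenation, so the
    # encoder can tell turns apart in a long context. The [usr]/[sys] tokens
    # are illustrative, not necessarily the paper's exact tags.
    return " ".join(
        ("[usr] " if speaker == "user" else "[sys] ") + text
        for speaker, text in turns
    )

class ContextEncoderWithLM(nn.Module):
    # Hypothetical context encoder: a bidirectional GRU whose forward states
    # predict the next token and whose backward states predict the previous
    # token, yielding a bidirectional-LM auxiliary loss as in the abstract.
    def __init__(self, vocab_size, emb_dim=400, hidden=400):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.fwd_head = nn.Linear(hidden, vocab_size)
        self.bwd_head = nn.Linear(hidden, vocab_size)

    def forward(self, context_ids):                             # (B, T)
        states, _ = self.encoder(self.embedding(context_ids))   # (B, T, 2H)
        h_fwd, h_bwd = states.chunk(2, dim=-1)                  # two (B, T, H)
        return states, h_fwd, h_bwd

def lm_auxiliary_loss(model, context_ids):
    # Forward direction predicts token t+1 from state t; backward direction
    # predicts token t-1 from state t.
    _, h_fwd, h_bwd = model(context_ids)
    ce = nn.CrossEntropyLoss()
    fwd = ce(model.fwd_head(h_fwd[:, :-1]).transpose(1, 2), context_ids[:, 1:])
    bwd = ce(model.bwd_head(h_bwd[:, 1:]).transpose(1, 2), context_ids[:, :-1])
    return fwd + bwd

if __name__ == "__main__":
    # Toy batch: 2 dialogues of 50 token ids from a 1000-word vocabulary.
    ids = torch.randint(0, 1000, (2, 50))
    model = ContextEncoderWithLM(vocab_size=1000)
    print(lm_auxiliary_loss(model, ids))

Under this sketch, the multi-task objective would combine the usual TRADE state-generation loss with a weighted LM term, e.g. loss = state_generation_loss + lm_weight * lm_auxiliary_loss(model, ids), where lm_weight is a tuning hyperparameter.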
Anthology ID:
2020.acl-main.637
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
7119–7124
URL:
https://aclanthology.org/2020.acl-main.637
DOI:
10.18653/v1/2020.acl-main.637
Cite (ACL):
Jun Quan and Deyi Xiong. 2020. Modeling Long Context for Task-Oriented Dialogue State Generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7119–7124, Online. Association for Computational Linguistics.
Cite (Informal):
Modeling Long Context for Task-Oriented Dialogue State Generation (Quan & Xiong, ACL 2020)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/2020.acl-main.637.pdf
Video:
http://slideslive.com/38928877
Data:
MultiWOZ