Towards End-to-End Learning for Efficient Dialogue Agent by Modeling Looking-ahead Ability

Zhuoxuan Jiang, Xian-Ling Mao, Ziming Huang, Jie Ma, Shaochun Li


Abstract
Learning an efficient dialogue manager from data with little manual intervention is important, especially for goal-oriented dialogues. However, existing methods either require substantial manual effort (e.g., reinforcement learning methods) or cannot guarantee dialogue efficiency (e.g., sequence-to-sequence methods). In this paper, we address this problem by proposing a novel end-to-end learning model that trains a dialogue agent to look ahead several future turns and generate the response expected to keep the dialogue efficient. Our method is data-driven and requires little manual intervention during system design. We evaluate it on two datasets from different scenarios, and the experimental results demonstrate the efficiency of our model.
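To make the abstract's look-ahead idea concrete, here is a minimal, hypothetical Python sketch: before replying, the agent simulates a few future turns for each candidate response and keeps the one whose continuation is expected to finish the task soonest. The function names, the user-simulator stand-in, and the cost-by-turn scoring are illustrative assumptions for exposition, not the authors' actual end-to-end architecture.

```python
# Conceptual sketch of look-ahead response selection (illustrative only).
from typing import Callable, List

def choose_response(
    history: List[str],
    propose: Callable[[List[str]], List[str]],   # candidate agent responses (assumed helper)
    simulate_user: Callable[[List[str]], str],   # stand-in user simulator (assumed helper)
    goal_reached: Callable[[List[str]], bool],   # task-success check (assumed helper)
    lookahead_turns: int = 3,
) -> str:
    """Return the candidate whose simulated continuation reaches the goal soonest."""
    best, best_cost = None, float("inf")
    for candidate in propose(history):
        rollout = history + [candidate]
        cost = lookahead_turns  # rollouts that never reach the goal get the maximum cost
        for step in range(lookahead_turns):
            if goal_reached(rollout):
                cost = step
                break
            rollout.append(simulate_user(rollout))  # simulated user turn
            rollout.append(propose(rollout)[0])     # greedy agent turn
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best

# Toy usage with trivial stand-ins for the learned components.
if __name__ == "__main__":
    propose = lambda h: ["Which date do you prefer?", "Booked!"]
    simulate_user = lambda h: "Next Friday, please."
    goal_reached = lambda h: any("Booked" in turn for turn in h)
    print(choose_response(["I need a flight to Stockholm."], propose,
                          simulate_user, goal_reached))
```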
Anthology ID: W19-5918
Volume: Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue
Month: September
Year: 2019
Address: Stockholm, Sweden
Editors: Satoshi Nakamura, Milica Gasic, Ingrid Zukerman, Gabriel Skantze, Mikio Nakano, Alexandros Papangelis, Stefan Ultes, Koichiro Yoshino
Venue: SIGDIAL
SIG: SIGDIAL
Publisher: Association for Computational Linguistics
Pages: 133–142
URL: https://aclanthology.org/W19-5918
DOI: 10.18653/v1/W19-5918
Cite (ACL): Zhuoxuan Jiang, Xian-Ling Mao, Ziming Huang, Jie Ma, and Shaochun Li. 2019. Towards End-to-End Learning for Efficient Dialogue Agent by Modeling Looking-ahead Ability. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 133–142, Stockholm, Sweden. Association for Computational Linguistics.
Cite (Informal): Towards End-to-End Learning for Efficient Dialogue Agent by Modeling Looking-ahead Ability (Jiang et al., SIGDIAL 2019)
PDF: https://preview.aclanthology.org/add_acl24_videos/W19-5918.pdf