A Hierarchical Neural Model for Learning Sequences of Dialogue Acts

Quan Hung Tran, Ingrid Zukerman, Gholamreza Haffari


Abstract
We propose a novel hierarchical Recurrent Neural Network (RNN) for learning sequences of Dialogue Acts (DAs). The input in this task is a sequence of utterances (i.e., conversational contributions), each comprising a sequence of tokens, and the output is a sequence of DA labels (one label per utterance). Our model leverages the hierarchical nature of dialogue data by using two nested RNNs that capture long-range dependencies at the dialogue level and the utterance level. This model is combined with an attention mechanism that focuses on salient tokens in utterances. Our experimental results show that our model outperforms strong baselines on two popular datasets, Switchboard and MapTask; and our detailed empirical analysis highlights the impact of each aspect of our model.
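The hierarchy described in the abstract can be sketched as two nested recurrences: a token-level RNN with attention produces one vector per utterance, and a dialogue-level RNN over those vectors produces one DA distribution per utterance. The following is a minimal numpy illustration of that structure only; the cell type, dimensions, parameter names, and random weights are all assumptions for the sketch, not the paper's trained architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D_EMB, D_UTT, D_DLG, N_LABELS = 8, 16, 16, 4  # illustrative sizes, not the paper's

# Hypothetical parameters (random here; learned in the actual model)
W_xu = rng.normal(0, 0.1, (D_UTT, D_EMB))   # utterance-level RNN: input weights
W_hu = rng.normal(0, 0.1, (D_UTT, D_UTT))   # utterance-level RNN: recurrent weights
w_att = rng.normal(0, 0.1, D_UTT)           # attention scoring vector over tokens
W_xd = rng.normal(0, 0.1, (D_DLG, D_UTT))   # dialogue-level RNN: input weights
W_hd = rng.normal(0, 0.1, (D_DLG, D_DLG))   # dialogue-level RNN: recurrent weights
W_out = rng.normal(0, 0.1, (N_LABELS, D_DLG))  # DA label projection

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def encode_utterance(token_embs):
    """Token-level RNN plus attention -> one vector per utterance."""
    h = np.zeros(D_UTT)
    states = []
    for x in token_embs:                 # simple tanh RNN cell over tokens
        h = np.tanh(W_xu @ x + W_hu @ h)
        states.append(h)
    H = np.stack(states)                 # (n_tokens, D_UTT)
    alpha = softmax(H @ w_att)           # attention weights over tokens
    return alpha @ H                     # attention-weighted sum of token states

def label_dialogue(dialogue):
    """Dialogue-level RNN over utterance vectors -> one DA distribution each."""
    s = np.zeros(D_DLG)
    dists = []
    for utt in dialogue:
        u = encode_utterance(utt)        # utterance vector from the inner RNN
        s = np.tanh(W_xd @ u + W_hd @ s) # carry long-range dialogue context
        dists.append(softmax(W_out @ s)) # distribution over DA labels
    return dists

# Toy dialogue: 3 utterances with random "token embeddings" of varying length
dialogue = [rng.normal(size=(n, D_EMB)) for n in (5, 3, 7)]
dists = label_dialogue(dialogue)         # one label distribution per utterance
```

The key design point the sketch mirrors is that the dialogue-level state persists across utterances, so a label prediction can depend on context far earlier in the conversation, while the attention step lets each utterance vector emphasize its salient tokens.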
Anthology ID:
E17-1041
Volume:
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
Month:
April
Year:
2017
Address:
Valencia, Spain
Editors:
Mirella Lapata, Phil Blunsom, Alexander Koller
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
428–437
URL:
https://aclanthology.org/E17-1041
Cite (ACL):
Quan Hung Tran, Ingrid Zukerman, and Gholamreza Haffari. 2017. A Hierarchical Neural Model for Learning Sequences of Dialogue Acts. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 428–437, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal):
A Hierarchical Neural Model for Learning Sequences of Dialogue Acts (Tran et al., EACL 2017)
PDF:
https://preview.aclanthology.org/ingest-acl-2023-videos/E17-1041.pdf