How to Make Context More Useful? An Empirical Study on Context-Aware Neural Conversational Models

Zhiliang Tian, Rui Yan, Lili Mou, Yiping Song, Yansong Feng, Dongyan Zhao


Abstract
Generative conversational systems are attracting increasing attention in natural language processing (NLP). Recently, researchers have noticed the importance of context information in dialog processing and have built various models to utilize it. However, there has been no systematic comparison analyzing how to use context effectively. In this paper, we conduct an empirical study comparing various models and investigate the effect of context information in dialog systems. We also propose a variant that explicitly weights context vectors by context-query relevance and outperforms the baselines.
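The abstract describes the proposed variant only at a high level. As a minimal sketch of the general idea, not the paper's exact formulation, each context utterance vector can be weighted by its relevance to the query before the context vectors are combined; cosine similarity is assumed here as the relevance measure, and the function names are illustrative:

import numpy as np

def cosine_similarity(a, b):
    # Relevance score between a context utterance vector and the query vector.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def weighted_context(context_vecs, query_vec):
    # Hypothetical helper: weight each context vector by its query relevance,
    # then combine them into one context representation via a normalized
    # weighted sum. The paper's exact weighting scheme may differ.
    weights = np.array([cosine_similarity(c, query_vec) for c in context_vecs])
    weights /= np.abs(weights).sum() + 1e-8  # normalize the weights
    return sum(w * c for w, c in zip(weights, context_vecs))

For example, given two 300-dimensional utterance embeddings u1 and u2 and a query embedding q, weighted_context([u1, u2], q) returns a single 300-dimensional context vector in which the utterance more relevant to the query contributes more.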
Anthology ID:
P17-2036
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Editors:
Regina Barzilay, Min-Yen Kan
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
231–236
URL:
https://aclanthology.org/P17-2036
DOI:
10.18653/v1/P17-2036
Cite (ACL):
Zhiliang Tian, Rui Yan, Lili Mou, Yiping Song, Yansong Feng, and Dongyan Zhao. 2017. How to Make Context More Useful? An Empirical Study on Context-Aware Neural Conversational Models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 231–236, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
How to Make Context More Useful? An Empirical Study on Context-Aware Neural Conversational Models (Tian et al., ACL 2017)
PDF:
https://preview.aclanthology.org/ingest-bitext-workshop/P17-2036.pdf