Context-aware Natural Language Generation for Spoken Dialogue Systems

Hao Zhou, Minlie Huang, Xiaoyan Zhu


Abstract
Natural language generation (NLG) is an important component of question answering (QA) systems and has a significant impact on system quality. Most traditional QA systems based on templates or rules tend to generate rigid and stylised responses that lack the natural variation of human language. Furthermore, such methods require considerable manual effort to author the templates or rules. To address this problem, we propose a Context-Aware LSTM (CA-LSTM) model for NLG. The model is entirely data-driven, with no manually designed templates or rules. In addition, context information, including the question to be answered, the semantic values to be expressed in the response, and the dialogue act type of the interaction, is incorporated into the neural network model, enabling it to produce varied and informative responses. Quantitative and human evaluations show that CA-LSTM achieves state-of-the-art performance.
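The core idea the abstract describes, conditioning an LSTM decoder on context (the question, semantic values, and dialogue act), is commonly realised by concatenating a context vector with the word embedding at each decoding step. Below is a minimal, self-contained sketch of that conditioning scheme in pure Python; the toy dimensions, the fixed weights, and the particular context encoding are illustrative assumptions, not the paper's actual architecture.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(z, h, c, W, b):
    """One LSTM step over the concatenated input z = [word_emb; context].
    W maps gate name -> weight matrix (one row per hidden unit),
    b maps gate name -> bias vector. Purely illustrative, not the
    paper's CA-LSTM."""
    full = z + h  # gates see the current input plus the previous hidden state
    def gate(name, act):
        return [act(sum(w * v for w, v in zip(row, full)) + bias)
                for row, bias in zip(W[name], b[name])]
    i, f, o = (gate(n, sigmoid) for n in "ifo")  # input/forget/output gates
    g = gate("g", math.tanh)                     # candidate cell update
    c_new = [fv * cv + iv * gv for fv, cv, iv, gv in zip(f, c, i, g)]
    h_new = [ov * math.tanh(cv) for ov, cv in zip(o, c_new)]
    return h_new, c_new

# Toy dimensions: 2-d word embedding, 3-d context vector, 2 hidden units.
EMB, CTX, HID = 2, 3, 2
IN = EMB + CTX + HID
W = {n: [[0.1 * (r + k + 1) for k in range(IN)] for r in range(HID)]
     for n in "ifog"}
b = {n: [0.0] * HID for n in "ifog"}

# Hypothetical fixed vector encoding the question, slot values, and dialogue
# act; in the paper this would come from learned encoders.
context = [0.5, -0.2, 0.1]
h, c = [0.0] * HID, [0.0] * HID
for word_emb in ([0.3, 0.7], [0.1, -0.4]):   # two decoding steps
    h, c = lstm_step(word_emb + context, h, c, W, b)
```

Because the same context vector is re-injected at every step, the decoder's output distribution can stay consistent with the question and dialogue act throughout the generated response, rather than only at the first word.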
Anthology ID:
C16-1191
Volume:
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
Month:
December
Year:
2016
Address:
Osaka, Japan
Editors:
Yuji Matsumoto, Rashmi Prasad
Venue:
COLING
Publisher:
The COLING 2016 Organizing Committee
Pages:
2032–2041
URL:
https://aclanthology.org/C16-1191
Cite (ACL):
Hao Zhou, Minlie Huang, and Xiaoyan Zhu. 2016. Context-aware Natural Language Generation for Spoken Dialogue Systems. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2032–2041, Osaka, Japan. The COLING 2016 Organizing Committee.
Cite (Informal):
Context-aware Natural Language Generation for Spoken Dialogue Systems (Zhou et al., COLING 2016)
PDF:
https://preview.aclanthology.org/landing_page/C16-1191.pdf