Abstract
Generating semantically coherent responses remains a major challenge in dialogue generation. Unlike conventional text generation tasks, the mapping between inputs and responses in conversation is more complicated, demanding an understanding of utterance-level semantic dependency, i.e., the relation between the overall meanings of inputs and outputs. To address this problem, we propose an Auto-Encoder Matching (AEM) model to learn such dependency. The model contains two auto-encoders and one mapping module: the auto-encoders learn the semantic representations of inputs and responses, and the mapping module learns to connect these utterance-level representations. Experimental results from automatic and human evaluations demonstrate that our model generates responses of higher coherence and fluency than baseline models.
- Anthology ID:
- D18-1075
- Volume:
- Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
- Month:
- October-November
- Year:
- 2018
- Address:
- Brussels, Belgium
- Venue:
- EMNLP
- SIG:
- SIGDAT
- Publisher:
- Association for Computational Linguistics
- Pages:
- 702–707
- URL:
- https://aclanthology.org/D18-1075
- DOI:
- 10.18653/v1/D18-1075
- Cite (ACL):
- Liangchen Luo, Jingjing Xu, Junyang Lin, Qi Zeng, and Xu Sun. 2018. An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 702–707, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal):
- An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation (Luo et al., EMNLP 2018)
- PDF:
- https://preview.aclanthology.org/paclic-22-ingestion/D18-1075.pdf
- Code:
- lancopku/AMM
- Data:
- DailyDialog
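The architecture described in the abstract can be sketched in miniature: two auto-encoders learn utterance-level representations, and a mapping module connects the input-side representation to the response-side one. This is only an illustrative toy, not the paper's actual model — the layer sizes, the plain linear maps, and the tanh nonlinearity are all assumptions made for brevity (the paper uses recurrent sequence auto-encoders over token sequences).

```python
import math
import random

random.seed(0)


def linear(rows, cols):
    """Random weight matrix as a list of rows (illustrative initialization)."""
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]


def matvec(w, x):
    """Multiply weight matrix w by vector x."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]


def tanh_vec(x):
    return [math.tanh(v) for v in x]


class AutoEncoder:
    """Maps a fixed-size utterance vector to a latent representation and back."""

    def __init__(self, dim_in, dim_hidden):
        self.enc = linear(dim_hidden, dim_in)
        self.dec = linear(dim_in, dim_hidden)

    def encode(self, x):
        return tanh_vec(matvec(self.enc, x))

    def decode(self, h):
        return matvec(self.dec, h)


class MappingModule:
    """Connects the input-side latent space to the response-side latent space."""

    def __init__(self, dim_hidden):
        self.w = linear(dim_hidden, dim_hidden)

    def __call__(self, h):
        return tanh_vec(matvec(self.w, h))


def respond(src_utterance, src_ae, mapper, tgt_ae):
    """Full pipeline: encode input, map the latent code, decode a response representation."""
    h_src = src_ae.encode(src_utterance)  # utterance-level semantics of the input
    h_tgt = mapper(h_src)                 # learned utterance-level semantic dependency
    return tgt_ae.decode(h_tgt)           # representation for the response decoder


# Toy usage with a hypothetical 6-dimensional "utterance embedding".
src_ae = AutoEncoder(dim_in=6, dim_hidden=4)
tgt_ae = AutoEncoder(dim_in=6, dim_hidden=4)
mapper = MappingModule(dim_hidden=4)
out = respond([0.5, -0.2, 0.1, 0.0, 0.3, -0.4], src_ae, mapper, tgt_ae)
print(len(out))  # response representation has the output dimensionality, 6
```

The key design point the sketch mirrors is that the two auto-encoders and the mapping module are separate components, so each auto-encoder can learn its own utterance-level semantic space while the mapping module alone models the dependency between them.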