Generating Dialogue Responses from a Semantic Latent Space

Wei-Jen Ko, Avik Ray, Yilin Shen, Hongxia Jin


Abstract
Existing open-domain dialogue generation models are usually trained to mimic the gold response in the training set using a cross-entropy loss over the vocabulary. However, a good response need not resemble the gold response, since a given prompt admits multiple valid responses. In this work, we hypothesize that current models are unable to integrate information from the multiple semantically similar valid responses to a prompt, which results in generic and uninformative generations. To address this issue, we propose an alternative to end-to-end classification over the vocabulary: we instead learn the relationship between prompts and responses as a regression task in a latent space. In our novel dialogue generation model, the representations of semantically related sentences lie close to each other in the latent space. Human evaluation shows that learning the task in a continuous space produces responses that are both relevant and informative.
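The abstract contrasts token-level cross-entropy training with regression onto a semantic latent vector of the response. The following is a minimal, self-contained sketch of that general idea, not the authors' implementation: the encoder architectures, latent dimensionality, MSE loss, and the stand-in "semantic" response encoder are all assumptions made for illustration.

```python
# Illustrative sketch (NOT the paper's code): train a prompt encoder to regress
# onto a semantic latent vector of the gold response, instead of matching the
# response token-by-token with cross-entropy. Architectures and hyperparameters
# below are placeholder assumptions.
import torch
import torch.nn as nn

VOCAB, DIM, LATENT = 1000, 128, 64  # toy sizes

class SentenceEncoder(nn.Module):
    """Encodes a token sequence into a fixed-size latent vector."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.gru = nn.GRU(DIM, DIM, batch_first=True)
        self.proj = nn.Linear(DIM, LATENT)

    def forward(self, tokens):              # tokens: (batch, seq_len)
        _, h = self.gru(self.emb(tokens))   # h: (1, batch, DIM)
        return self.proj(h.squeeze(0))      # (batch, LATENT)

prompt_enc = SentenceEncoder()    # maps a prompt to a predicted response latent
response_enc = SentenceEncoder()  # stands in for a semantic sentence encoder
                                  # whose space groups paraphrased responses together

optimizer = torch.optim.Adam(prompt_enc.parameters(), lr=1e-3)

# Toy (prompt, gold response) token ids standing in for real dialogue pairs.
prompts = torch.randint(0, VOCAB, (8, 12))
responses = torch.randint(0, VOCAB, (8, 12))

pred_latent = prompt_enc(prompts)
with torch.no_grad():                       # the response latent is the regression target
    target_latent = response_enc(responses)

# Regression in the latent space: pull the prompt's prediction toward the
# response's semantic embedding rather than toward its exact token sequence.
loss = nn.functional.mse_loss(pred_latent, target_latent)
loss.backward()
optimizer.step()

# At inference time, a decoder (omitted here) would generate a response
# conditioned on pred_latent rather than being trained with token-level
# teacher forcing against a single gold response.
```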
Anthology ID:
2020.emnlp-main.352
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4339–4349
URL:
https://aclanthology.org/2020.emnlp-main.352
DOI:
10.18653/v1/2020.emnlp-main.352
Cite (ACL):
Wei-Jen Ko, Avik Ray, Yilin Shen, and Hongxia Jin. 2020. Generating Dialogue Responses from a Semantic Latent Space. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4339–4349, Online. Association for Computational Linguistics.
Cite (Informal):
Generating Dialogue Responses from a Semantic Latent Space (Ko et al., EMNLP 2020)
PDF:
https://preview.aclanthology.org/update-css-js/2020.emnlp-main.352.pdf
Video:
https://slideslive.com/38939280
Data:
DailyDialog, PERSONA-CHAT