Relevant and Informative Response Generation using Pointwise Mutual Information

Junya Takayama, Yuki Arase


Abstract
A sequence-to-sequence model tends to generate generic responses that convey little information about the input utterance. To address this problem, we propose a neural model that generates relevant and informative responses. Our model has a simple architecture so that it can be easily applied to existing neural dialogue models. Specifically, using positive pointwise mutual information, it first identifies keywords that frequently co-occur in responses to a given utterance. The model then encourages the decoder to use these keywords during response generation. Experimental results demonstrate that our model successfully diversifies responses relative to previous models.
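
For illustration, the sketch below implements the keyword-identification step the abstract describes: positive pointwise mutual information (PPMI) scores between utterance-side and response-side words are estimated from a corpus of utterance-response pairs, and the highest-scoring response words for a new utterance are returned as keywords. This is a minimal Python sketch written for this page, not the authors' implementation; the corpus format, the aggregation over utterance words, and the top-k cutoff are assumptions made for the example.

import math
from collections import Counter
from itertools import product

def build_ppmi(pairs):
    # Estimate PPMI(u, r) = max(0, log(P(u, r) / (P(u) * P(r)))) from
    # co-occurrence counts of utterance words u and response words r.
    joint, u_marg, r_marg = Counter(), Counter(), Counter()
    total = 0
    for u_toks, r_toks in pairs:
        for u_w, r_w in product(set(u_toks), set(r_toks)):
            joint[(u_w, r_w)] += 1
            u_marg[u_w] += 1
            r_marg[r_w] += 1
            total += 1
    ppmi = {}
    for (u_w, r_w), count in joint.items():
        pmi = math.log((count / total) /
                       ((u_marg[u_w] / total) * (r_marg[r_w] / total)))
        ppmi[(u_w, r_w)] = max(0.0, pmi)
    return ppmi

def keywords_for(utterance_tokens, ppmi, top_k=5):
    # Score each response-side word by its best PPMI against any utterance word;
    # the top-k words would then be handed to the decoder as keywords.
    scores = Counter()
    for (u_w, r_w), score in ppmi.items():
        if u_w in utterance_tokens:
            scores[r_w] = max(scores[r_w], score)
    return [w for w, _ in scores.most_common(top_k)]

Example usage: call build_ppmi on a list of tokenized (utterance, response) pairs, then keywords_for("how was the movie".split(), table) returns the response words most strongly associated with that utterance.
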
Anthology ID: W19-4115
Volume: Proceedings of the First Workshop on NLP for Conversational AI
Month: August
Year: 2019
Address: Florence, Italy
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 133–138
URL: https://aclanthology.org/W19-4115
DOI: 10.18653/v1/W19-4115
Cite (ACL): Junya Takayama and Yuki Arase. 2019. Relevant and Informative Response Generation using Pointwise Mutual Information. In Proceedings of the First Workshop on NLP for Conversational AI, pages 133–138, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Relevant and Informative Response Generation using Pointwise Mutual Information (Takayama & Arase, ACL 2019)
PDF: https://preview.aclanthology.org/emnlp-22-attachments/W19-4115.pdf