Speaker Sensitive Response Evaluation Model

JinYeong Bak, Alice Oh
Abstract
Automatic evaluation of open-domain dialogue response generation is very challenging because there are many appropriate responses for a given context. Existing evaluation models merely compare the generated response with the ground truth response and rate many appropriate responses as inappropriate if they deviate from the ground truth. One approach to resolving this problem is to consider the similarity of the generated response to the conversational context. In this paper, we propose an automatic evaluation model based on that idea and learn the model parameters from an unlabeled conversation corpus. Our approach considers the speakers in defining different levels of similar context. We use a Twitter conversation corpus that contains many speakers and conversations to test our evaluation model. Experiments show that our model outperforms existing evaluation metrics, achieving higher correlation with human annotation scores. We also show that our model trained on Twitter can be applied to movie dialogues without any additional training. We provide our code and the learned parameters so that they can be used for automatic evaluation of dialogue response generation models.
Anthology ID:
2020.acl-main.568
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6376–6385
URL:
https://aclanthology.org/2020.acl-main.568
DOI:
10.18653/v1/2020.acl-main.568
Cite (ACL):
JinYeong Bak and Alice Oh. 2020. Speaker Sensitive Response Evaluation Model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6376–6385, Online. Association for Computational Linguistics.
Cite (Informal):
Speaker Sensitive Response Evaluation Model (Bak & Oh, ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.568.pdf
Video:
http://slideslive.com/38929430
Code:
NoSyu/SSREM (https://github.com/NoSyu/SSREM)