SMRT Chatbots: Improving Non-Task-Oriented Dialog with Simulated Multiple Reference Training

Huda Khayrallah, João Sedoc


Abstract
Non-task-oriented dialog models suffer from poor quality and non-diverse responses. To overcome limited conversational data, we apply Simulated Multiple Reference Training (SMRT; Khayrallah et al., 2020), and use a paraphraser to simulate multiple responses per training prompt. We find SMRT improves over a strong Transformer baseline as measured by human and automatic quality scores and lexical diversity. We also find SMRT is comparable to pretraining in human evaluation quality, and outperforms pretraining on automatic quality and lexical diversity, without requiring related-domain dialog data.
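The abstract only sketches the training scheme, so here is a minimal, runnable Python sketch of the simulated-multiple-reference idea. All names (paraphrase_n, simulated_references, sample_training_pair) are hypothetical illustrations, not the authors' released code, and the paraphraser is a stand-in for a real model (the paper builds one from ParaBank data).

import random

def paraphrase_n(response: str, n: int) -> list[str]:
    # Stand-in paraphraser: a real system would return n genuine
    # paraphrases of `response`; this placeholder just repeats the
    # input so the sketch runs end to end.
    return [response for _ in range(n)]

def simulated_references(response: str, n: int = 5) -> list[str]:
    # Keep the original human reference and add n simulated
    # alternatives, turning a single-reference training pair into a
    # multi-reference one.
    return [response] + paraphrase_n(response, n)

def sample_training_pair(prompt: str, response: str) -> tuple[str, str]:
    # At training time, pair each prompt with one reference sampled
    # from the simulated set rather than always the single original.
    return prompt, random.choice(simulated_references(response))

# Example: expand one DailyDialog-style pair into a sampled pair.
print(sample_training_pair("How was your weekend?", "It was great, thanks!"))

As the abstract notes, sampling from simulated references supplies the response diversity that a single human reference lacks, without requiring additional related-domain dialog data.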
Anthology ID:
2020.findings-emnlp.403
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4489–4505
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/2020.findings-emnlp.403/
DOI:
10.18653/v1/2020.findings-emnlp.403
Cite (ACL):
Huda Khayrallah and João Sedoc. 2020. SMRT Chatbots: Improving Non-Task-Oriented Dialog with Simulated Multiple Reference Training. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4489–4505, Online. Association for Computational Linguistics.
Cite (Informal):
SMRT Chatbots: Improving Non-Task-Oriented Dialog with Simulated Multiple Reference Training (Khayrallah & Sedoc, Findings 2020)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/2020.findings-emnlp.403.pdf
Video:
https://slideslive.com/38940711
Data
DailyDialog, OpenSubtitles, ParaBank