StyleDGPT: Stylized Response Generation with Pre-trained Language Models
Ze Yang | Wei Wu | Can Xu | Xinnian Liang | Jiaqi Bai | Liran Wang | Wei Wang | Zhoujun Li
Findings of the Association for Computational Linguistics: EMNLP 2020
Generating responses in a desired style has great potential to extend the applications of open-domain dialogue systems, yet it is hindered by the lack of parallel data for training. In this work, we explore this challenging task with pre-trained language models, which have brought breakthroughs to various natural language tasks. To this end, we introduce a KL loss and a style classifier into the fine-tuning step in order to steer response generation towards the target style at both the word level and the sentence level. Comprehensive empirical studies on two public datasets indicate that our model can significantly outperform state-of-the-art methods in terms of both style consistency and contextual coherence.
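As a rough illustration of the kind of objective the abstract describes, the sketch below combines a standard MLE term with a word-level KL term toward a style language model and a sentence-level term from a style classifier. The function name, tensor shapes, and weights `alpha` and `beta` are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def stylized_finetune_loss(lm_logits, style_lm_logits, response_ids,
                           style_clf_prob, alpha=0.1, beta=0.1):
    """Hypothetical combined fine-tuning objective (shapes assumed):
    lm_logits:       (B, T, V) dialogue model logits
    style_lm_logits: (B, T, V) style language model logits
    response_ids:    (B, T)    gold response token ids
    style_clf_prob:  (B,)      classifier prob. of the target style
    """
    vocab = lm_logits.size(-1)
    # Maximum-likelihood loss on the gold response tokens.
    mle = F.cross_entropy(lm_logits.view(-1, vocab), response_ids.view(-1))
    # Word-level: KL(style LM || dialogue model) over next-token distributions.
    kl = F.kl_div(F.log_softmax(lm_logits, dim=-1),
                  F.softmax(style_lm_logits, dim=-1),
                  reduction="batchmean")
    # Sentence-level: push the classifier's target-style probability up.
    sent = -torch.log(style_clf_prob + 1e-8).mean()
    return mle + alpha * kl + beta * sent
```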