Prompting for a conversation: How to control a dialog model?

Josef Valvoda, Yimai Fang, David Vandyke


Abstract
Dialog modelling faces a difficult trade-off. Models are trained on a large amount of text, yet their responses need to be limited to the desired scope and style of the dialog agent. Because the datasets used to achieve the former contain language that is not compatible with the latter, pre-trained dialog models are fine-tuned on smaller curated datasets. However, the fine-tuning process robs them of the ability to produce diverse responses, eventually reducing them to dull conversation partners. In this paper we investigate whether prompting can help mitigate this trade-off. Specifically, we experiment with conditioning the prompt on the query, rather than training a single prompt for all queries. Following the intuition that freezing the pre-trained language model conserves its expressivity, we find that, compared to fine-tuning, prompting can achieve a higher BLEU score and substantially improve the diversity and novelty of the responses.
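
The approach described in the abstract, keeping the pre-trained language model frozen and learning a soft prompt that is conditioned on the incoming query, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the choice of GPT-2, the mean-pooled prompt encoder, and the prompt length are assumptions made only for the example.

# Minimal sketch (assumed, not the paper's code) of query-conditioned prompt
# tuning: a small trainable encoder maps each query into soft prompt vectors
# that are prepended to the input embeddings of a frozen pre-trained LM.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel

class QueryConditionedPrompt(nn.Module):
    def __init__(self, model_name="gpt2", prompt_len=10):
        super().__init__()
        self.lm = GPT2LMHeadModel.from_pretrained(model_name)
        for p in self.lm.parameters():          # freeze the pre-trained LM
            p.requires_grad = False
        d = self.lm.config.n_embd
        self.prompt_len = prompt_len
        # Hypothetical prompt encoder: pools the query's token embeddings and
        # expands them into `prompt_len` soft prompt vectors.
        self.prompt_encoder = nn.Sequential(
            nn.Linear(d, d), nn.Tanh(), nn.Linear(d, prompt_len * d)
        )

    def forward(self, input_ids, attention_mask, labels=None):
        tok_emb = self.lm.transformer.wte(input_ids)           # (B, T, d)
        pooled = tok_emb.mean(dim=1)                           # crude pooling, (B, d)
        prompt = self.prompt_encoder(pooled).view(
            input_ids.size(0), self.prompt_len, -1)            # (B, P, d)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)    # prepend soft prompt
        prompt_mask = torch.ones(
            input_ids.size(0), self.prompt_len,
            dtype=attention_mask.dtype, device=attention_mask.device)
        attn = torch.cat([prompt_mask, attention_mask], dim=1)
        if labels is not None:
            # ignore the loss on the soft prompt positions
            pad = torch.full(
                (input_ids.size(0), self.prompt_len), -100,
                dtype=labels.dtype, device=labels.device)
            labels = torch.cat([pad, labels], dim=1)
        return self.lm(inputs_embeds=inputs_embeds,
                       attention_mask=attn, labels=labels)

Under this setup only the prompt encoder's parameters would be passed to the optimizer, so the frozen model's weights (and hence its expressivity) are untouched while the learned, query-conditioned prompt steers its responses.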
Anthology ID:
2022.cai-1.1
Volume:
Proceedings of the Second Workshop on When Creative AI Meets Conversational AI
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Xianchao Wu, Peiying Ruan, Sheng Li, Yi Dong
Venue:
CAI
Publisher:
Association for Computational Linguistics
Pages:
1–8
URL:
https://aclanthology.org/2022.cai-1.1
Cite (ACL):
Josef Valvoda, Yimai Fang, and David Vandyke. 2022. Prompting for a conversation: How to control a dialog model? In Proceedings of the Second Workshop on When Creative AI Meets Conversational AI, pages 1–8, Gyeongju, Republic of Korea. Association for Computational Linguistics.
Cite (Informal):
Prompting for a conversation: How to control a dialog model? (Valvoda et al., CAI 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2022.cai-1.1.pdf
Data
DailyDialog