Dream to Chat: Model-based Reinforcement Learning on Dialogues with User Belief Modeling
Yue Zhao | Xiaoyu Wang | Dan Wang | Zhonglin Jiang | Qingqing Gu | Teng Chen | Ningyuan Xi | Jinxian Qu | Yong Chen | Luo Ji
Findings of the Association for Computational Linguistics: EMNLP 2025
World models have been widely utilized in robotics, gaming, and autonomous driving, yet their application to natural language tasks remains relatively limited. In this paper, we construct a dialogue world model that predicts future utterances and user beliefs, including emotion, sentiment, and intention. We propose a framework called DreamCUB, which shows that both this user belief modeling and the full dialogue world model can be established through LLM post-training. By formulating the dialogue system as a POMDP, we apply model-based reinforcement learning and solve it by maximizing the information bottleneck. Experiments show that the pretrained dialogue world model achieves state-of-the-art performance on emotion classification and sentiment identification, while dialogue quality is further enhanced by joint training of the policy, critic, and dialogue world model. Further analysis reveals that DreamCUB maintains a reasonable exploration-exploitation balance and also transfers well to out-of-domain scenarios such as empathetic dialogues.
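As a rough illustration of the components the abstract describes, the sketch below shows how a dialogue world model (next-utterance and user-belief prediction) could be combined with a policy in an imagined rollout, in the spirit of Dreamer-style model-based RL. All class and method names here are hypothetical placeholders, not the authors' implementation; in DreamCUB the world model would be an LLM obtained by post-training rather than the stub used here.

```python
# Hypothetical sketch of a dialogue-world-model rollout; names are illustrative only.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class UserBelief:
    """Latent user state the world model is asked to predict."""
    emotion: str      # e.g. "joy", "anger"
    sentiment: str    # e.g. "positive", "negative"
    intention: str    # e.g. "seek_support"


class DialogueWorldModel:
    """Predicts the next user utterance and user belief from the dialogue history.
    In the paper this role is played by a post-trained LLM; here it is a stub."""

    def predict(self, history: List[str]) -> Tuple[str, UserBelief]:
        # Placeholder standing in for the learned model.
        belief = UserBelief(emotion="neutral", sentiment="neutral", intention="chitchat")
        return "<predicted user reply>", belief


class DialoguePolicy:
    """Chooses the system response, conditioned on history and the predicted belief."""

    def respond(self, history: List[str], belief: UserBelief) -> str:
        return f"<response aware of {belief.emotion}/{belief.intention}>"


def imagined_rollout(world_model: DialogueWorldModel,
                     policy: DialoguePolicy,
                     history: List[str],
                     horizon: int = 2) -> List[str]:
    """Model-based 'imagination': alternate world-model predictions and policy
    responses for a few turns, yielding trajectories for policy/critic training."""
    trajectory = list(history)
    for _ in range(horizon):
        user_utt, belief = world_model.predict(trajectory)
        trajectory.append(user_utt)
        trajectory.append(policy.respond(trajectory, belief))
    return trajectory


if __name__ == "__main__":
    print(imagined_rollout(DialogueWorldModel(), DialoguePolicy(),
                           ["User: I had a rough day at work."]))
```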