CogDual: Enhancing Dual Cognition of LLMs via Reinforcement Learning with Implicit Rule-Based Rewards
Cheng Liu | Yifei Lu | Fanghua Ye | Jian Li | Xingyu Chen | Feiliang Ren | Zhaopeng Tu | Xiaolong Li
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Role-Playing Language Agents (RPLAs) have emerged as a significant application direction for Large Language Models (LLMs). Existing approaches typically rely on prompt engineering or supervised fine-tuning to make models imitate character behaviors in specific scenarios, but they often neglect the underlying cognitive mechanisms that drive these behaviors. Inspired by cognitive psychology, we introduce CogDual, a novel RPLA adopting a cognize-then-respond reasoning paradigm. By jointly modeling external situational awareness and internal self-awareness, CogDual generates responses with improved character consistency and contextual alignment. To further optimize performance, we employ reinforcement learning with two general-purpose reward schemes designed for open-domain text generation. Extensive experiments on the CoSER benchmark, as well as Cross-MR and LifeChoice, demonstrate that CogDual consistently outperforms existing baselines and generalizes effectively across diverse role-playing tasks.
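As a rough illustration of what a cognize-then-respond scaffold could look like, here is a minimal Python sketch. Everything below (function names, prompt wording, output format) is a hypothetical reading of the abstract's two-stage paradigm, not the paper's actual implementation: the model is first asked to articulate the character's external situational awareness and internal self-awareness, and only then to produce the in-character reply.

```python
# Hypothetical sketch of a cognize-then-respond prompt pipeline.
# All names and prompt wording are illustrative, not from the paper.

def build_cognition_prompt(character: str, scene: str, dialogue: str) -> str:
    """Stage 1: elicit the character's cognition before the reply
    (external situational awareness + internal self-awareness)."""
    return (
        f"You are role-playing as {character}.\n"
        f"Scene: {scene}\n"
        f"Dialogue so far: {dialogue}\n\n"
        "Before replying, reason in two steps:\n"
        "1. Situational awareness: what is happening around the character?\n"
        "2. Self-awareness: what does the character feel, want, and believe?\n"
        "Write these under a 'Cognition:' header, then reply under 'Response:'."
    )

def split_cognition_and_response(model_output: str) -> tuple[str, str]:
    """Stage 2: separate the cognition trace from the in-character reply,
    so the reply alone is shown to the user while the trace could be
    scored by a reward scheme during RL training."""
    head, _, tail = model_output.partition("Response:")
    cognition = head.replace("Cognition:", "").strip()
    return cognition, tail.strip()

if __name__ == "__main__":
    prompt = build_cognition_prompt(
        character="Elizabeth Bennet",
        scene="A ball at Netherfield; Mr. Darcy has just declined to dance.",
        dialogue="Charlotte: 'Did you hear what he said of you?'",
    )
    print(prompt)
```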