Revealing and Mitigating the Challenge of Detecting Character Knowledge Errors in LLM Role-Playing
Wenyuan Zhang | Shuaiyi Nie | Jiawei Sheng | Zefeng Zhang | Xinghua Zhang | Yongquan He | Tingwen Liu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language model (LLM) role-playing has gained widespread attention. Authentic character knowledge is crucial for constructing realistic LLM role-playing agents. However, existing works usually overlook LLMs' ability to detect characters' known knowledge errors (KKE) and unknown knowledge errors (UKE) while playing roles, which leads to low-quality automatic construction of character-training corpora. In this paper, we propose RoleKE-Bench to evaluate LLMs' ability to detect both KKE and UKE. The results indicate that even the latest LLMs struggle to detect these two types of errors effectively, especially when the errors involve familiar knowledge. We experiment with various reasoning strategies and propose an agent-based reasoning method, Self-Recollection and Self-Doubt (S2RD), to further explore the potential for improving error-detection capabilities.