2025
Can You Share Your Story? Modeling Clients’ Metacognition and Openness for LLM Therapist Evaluation
Minju Kim | Dongje Yoo | Yeonjun Hwang | Minseok Kang | Namyoung Kim | Minju Gwak | Beong-woo Kwak | Hyungjoo Chae | Harim Kim | Yunjoong Lee | Min Hee Kim | Dayi Jung | Kyong-Mee Chung | Jinyoung Yeo
Findings of the Association for Computational Linguistics: ACL 2025
Understanding clients’ thoughts and beliefs is fundamental in counseling, yet current evaluations of LLM therapists often fail to assess this ability. Existing evaluation methods rely on client simulators that readily disclose internal states to the therapist, making it difficult to determine whether an LLM therapist can uncover unexpressed perspectives. To address this limitation, we introduce MindVoyager, a novel evaluation framework featuring a controllable and realistic client simulator which dynamically adapts itself based on the ongoing counseling session, offering a more realistic and challenging evaluation environment. We further introduce evaluation metrics that assess the exploration ability of LLM therapists by measuring how thoroughly they understand the client’s beliefs and thoughts.
PRINCIPLES: Synthetic Strategy Memory for Proactive Dialogue Agents
Namyoung Kim | Kai Tzu-iunn Ong | Yeonjun Hwang | Minseok Kang | Iiseo Jihn | Gayoung Kim | Minju Kim | Jinyoung Yeo
Findings of the Association for Computational Linguistics: EMNLP 2025
Dialogue agents based on large language models (LLMs) have shown promising performance in proactive dialogue, which requires effective strategy planning. However, existing approaches to strategy planning for proactive dialogue face several limitations: limited strategy coverage, preference bias in planning, and reliance on costly additional training. To address these, we propose PRINCIPLES: a synthetic strategy memory for proactive dialogue agents. PRINCIPLES is derived through offline self-play simulations and serves as reusable knowledge that guides strategy planning during inference, eliminating the need for additional training and data annotation. We evaluate PRINCIPLES in both emotional support and persuasion domains, demonstrating consistent improvements over strong baselines. Furthermore, PRINCIPLES maintains its robustness across extended and more diverse evaluation settings. See our project page at https://huggingface.co/spaces/kimnamssya/Principles.
Towards Lifelong Dialogue Agents via Timeline-based Memory Management
Kai Tzu-iunn Ong | Namyoung Kim | Minju Gwak | Hyungjoo Chae | Taeyoon Kwon | Yohan Jo | Seung-won Hwang | Dongha Lee | Jinyoung Yeo
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
To achieve lifelong human-agent interaction, dialogue agents need to constantly memorize perceived information and properly retrieve it for response generation (RG). While prior studies focus on removing outdated memories to improve retrieval quality, we argue that such memories provide rich, important contextual cues for RG (e.g., changes in user behaviors) in long-term conversations. We present THEANINE, a framework for LLM-based lifelong dialogue agents. THEANINE forgoes memory removal and manages large-scale memories by linking them based on their temporal and cause-effect relations. Enabled by this linking structure, THEANINE augments RG with memory timelines - series of memories representing the evolution or causality of relevant past events. Along with THEANINE, we introduce TeaFarm, a counterfactual-driven evaluation scheme that addresses the limitations of G-Eval and human evaluation when assessing agent performance in integrating past memories into RG. A supplementary video for THEANINE and data for TeaFarm are at https://huggingface.co/spaces/ResearcherScholar/Theanine.