Haiyi Zhu
2025
Adaptive-VP: A Framework for LLM-Based Virtual Patients that Adapts to Trainees’ Dialogue to Facilitate Nurse Communication Training
Keyeun Lee | Seolhee Lee | Esther Hehsun Kim | Yena Ko | Jinsu Eun | Dahee Kim | Hyewon Cho | Haiyi Zhu | Robert E. Kraut | Eunyoung E. Suh | Eun-mee Kim | Hajin Lim
Findings of the Association for Computational Linguistics: ACL 2025
Effective communication training is essential to preparing nurses for high-quality patient care. While standardized patient (SP) simulations provide valuable experiential learning, they are often costly and inflexible. Virtual patient (VP) systems offer a scalable alternative, but most fail to adapt to the varying communication skills of trainees. In particular, when trainees respond ineffectively, VPs should escalate in hostility or become uncooperative—yet this level of adaptive interaction remains largely unsupported. To address this gap, we introduce Adaptive-VP, a VP dialogue generation framework that leverages large language models (LLMs) to dynamically adapt VP behavior based on trainee input. The framework features a pipeline for constructing clinically grounded yet flexible VP scenarios and a modular system for assessing trainee communication and adjusting VP responses in real time, while ensuring learner safety. We validated Adaptive-VP by simulating challenging patient conversations. Automated evaluation using a corpus from practicing nurses showed that our communication skill evaluation mechanism reflected real-world proficiency levels. Expert nurses further confirmed that Adaptive-VP produced more natural and realistic interactions than existing approaches, demonstrating its potential as a scalable and effective tool for nursing communication training.
2023
Automatic Reflection Generation for Peer-to-Peer Counseling
Emma O'Neil | João Sedoc | Diyi Yang | Haiyi Zhu | Lyle Ungar
Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
Online peer counseling platforms enable conversations between millions of people seeking and offering mental health support. Among counseling skills, reflective listening, i.e., capturing and returning to the client something the client has said, is important for positive therapeutic outcomes. We introduce a reflection generation system for online mental health support conversations leveraging GPT-3, a large language model. We compare few-shot learning against fine-tuning and assess the impact of the quality of training examples as measured by fluency, reflection resemblance, and overall preference. Fine-tuned GPT-3 generates responses that human evaluators rate as comparable in reflection quality to responses used for tuning. Models based on high-quality responses generate substantially better reflections than ones tuned on actual responses from a large online counseling service, and better reflections than the actual counselor responses. These results suggest the care needed in selecting examples for tuning generative models.