Prefix-Enhanced Large Language Models with Reused Training Data in Multi-Turn Medical Dialogue
Suxue Ma, Zhicheng Yang, Ruei-Sung Lin, Youbao Tang, Ning Zhang, Zhenjie Cao, Yuan Ni, Jing Xiao, Jieke Hou, Peng Chang
Abstract
Large Language Models have made impressive progress in the medical field. In medical dialogue scenarios, unlike traditional single-turn question-answering tasks, multi-turn doctor-patient dialogue tasks require AI doctors to interact with patients over multiple rounds, where the quality of each response affects overall model performance. In this paper, we propose PERT, which re-explores the value of multi-turn dialogue training data after the supervised fine-tuning phase by integrating a prefix learning strategy, further enhancing response quality. Our preliminary results show that PERT achieves notable improvements on gynecological data, with an increase of up to 0.22 on a 5-point rating scale.
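The abstract does not show the authors' implementation, but the core idea of prefix learning over a frozen, already fine-tuned model can be illustrated with a minimal, hypothetical sketch. Here the `PrefixEnhancedLM` wrapper, its `prefix_len`, and the assumption that the backbone accepts token embeddings directly are all illustrative choices, not details from the paper:

```python
# Hypothetical sketch of prefix learning after SFT (not the authors' exact PERT code):
# trainable prefix vectors are prepended to the token embeddings of each dialogue
# turn while the supervised fine-tuned backbone stays frozen.
import torch
import torch.nn as nn

class PrefixEnhancedLM(nn.Module):
    def __init__(self, base_lm: nn.Module, hidden_size: int, prefix_len: int = 10):
        super().__init__()
        self.base_lm = base_lm  # frozen LLM, assumed to map embeddings -> logits
        for p in self.base_lm.parameters():
            p.requires_grad = False
        # The prefix is the only trainable parameter in this phase.
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden_size) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden) embeddings of the dialogue history
        batch = token_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the learned prefix and run the frozen backbone on the result.
        return self.base_lm(torch.cat([prefix, token_embeds], dim=1))
```

Under this reading, the same multi-turn SFT dialogues can be replayed cheaply in the second phase, since gradients flow only into the small prefix rather than the full model.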
- Anthology ID: 2025.cl4health-1.3
- Volume: Proceedings of the Second Workshop on Patient-Oriented Language Processing (CL4Health)
- Month: May
- Year: 2025
- Address: Albuquerque, New Mexico
- Editors: Sophia Ananiadou, Dina Demner-Fushman, Deepak Gupta, Paul Thompson
- Venues: CL4Health | WS
- Publisher: Association for Computational Linguistics
- Pages: 26–33
- URL: https://preview.aclanthology.org/fix-sig-urls/2025.cl4health-1.3/
- Cite (ACL): Suxue Ma, Zhicheng Yang, Ruei-Sung Lin, Youbao Tang, Ning Zhang, Zhenjie Cao, Yuan Ni, Jing Xiao, Jieke Hou, and Peng Chang. 2025. Prefix-Enhanced Large Language Models with Reused Training Data in Multi-Turn Medical Dialogue. In Proceedings of the Second Workshop on Patient-Oriented Language Processing (CL4Health), pages 26–33, Albuquerque, New Mexico. Association for Computational Linguistics.
- Cite (Informal): Prefix-Enhanced Large Language Models with Reused Training Data in Multi-Turn Medical Dialogue (Ma et al., CL4Health 2025)
- PDF: https://preview.aclanthology.org/fix-sig-urls/2025.cl4health-1.3.pdf