Qianyun Du
2025
Should I Believe in What Medical AI Says? A Chinese Benchmark for Medication Based on Knowledge and Reasoning
Yue Wu | Yangmin Huang | Qianyun Du | Lixian Lai | Zhiyang He | Jiaxue Hu | Xiaodong Tao
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Large language models (LLMs) show potential in healthcare but often generate hallucinations, especially when handling unfamiliar information. In the medication domain, a systematic benchmark for evaluating model capabilities is lacking, a critical gap given the high-risk nature of medical information. This paper introduces a Chinese benchmark for assessing models on medication tasks, focusing on knowledge and reasoning across six datasets: indication, dosage and administration, contraindicated population, mechanisms of action, drug recommendation, and drug interaction. We evaluate eight closed-source and five open-source models to identify knowledge boundaries, providing the first systematic analysis of limitations and risks in proprietary medical models.
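The abstract does not specify the benchmark's data schema or scoring protocol. As a rough illustration only, the Python sketch below shows the kind of per-task evaluation loop such a benchmark implies; the item format, the ask_model placeholder, and per-category exact-match accuracy are all assumptions for illustration, not the paper's released code.

from collections import defaultdict

# Hypothetical item format: each question carries its task category
# (one of the six medication tasks), a prompt, and a gold answer label.
ITEMS = [
    {"task": "indication", "prompt": "...", "gold": "A"},
    {"task": "drug_interaction", "prompt": "...", "gold": "C"},
]

def ask_model(prompt: str) -> str:
    """Placeholder for a call to the LLM under evaluation."""
    raise NotImplementedError

def evaluate(items):
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        pred = ask_model(item["prompt"]).strip()
        total[item["task"]] += 1
        correct[item["task"]] += int(pred == item["gold"])
    # Per-task accuracy exposes knowledge boundaries: a model may handle
    # indications well yet fail on contraindicated populations or interactions.
    return {task: correct[task] / total[task] for task in total}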
Chain of Strategy Optimization Makes Large Language Models Better Emotional Supporter
Weixiang Zhao | Xingyu Sui | Xinyang Han | Yang Deng | Yulin Hu | Jiahe Guo | Libo Qin | Qianyun Du | Shijin Wang | Yanyan Zhao | Bing Qin | Ting Liu
Findings of the Association for Computational Linguistics: EMNLP 2025
The growing emotional stress in modern society has increased the demand for Emotional Support Conversations (ESC). While Large Language Models (LLMs) show promise for ESC, they face two key challenges: (1) low strategy selection accuracy, and (2) preference bias, limiting their adaptability to users’ emotional needs. Existing supervised fine-tuning (SFT) struggles to address these issues, as it rigidly trains models on single gold-standard responses without modeling nuanced strategy trade-offs. To overcome these limitations, we propose a novel two-stage framework that optimizes strategy selection preferences at each dialogue turn. We first leverage Monte Carlo Tree Search to construct ESC-Pro, a high-quality preference dataset with turn-level strategy-response pairs. Training on ESC-Pro with Chain-of-Strategy Optimization (CSO) then improves strategy selection accuracy and mitigates preference bias, enabling LLMs to generate more empathetic and contextually appropriate responses. Experiments on LLaMA-3.1-8B, Gemma-2-9B, and Qwen2.5-7B demonstrate that CSO outperforms standard SFT, highlighting the efficacy of fine-grained, turn-level preference modeling in ESC.
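The abstract describes the pipeline (MCTS-built turn-level strategy-response preference pairs, then preference optimization) but not the training objective itself. As a minimal sketch, the code below applies a generic DPO-style loss to a batch of (chosen, rejected) pairs at individual dialogue turns; the function name, beta value, and dummy log-probabilities are assumptions for illustration, not the paper's CSO implementation.

import torch
import torch.nn.functional as F

def turn_level_preference_loss(policy_chosen_logp, policy_rejected_logp,
                               ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Each *_logp tensor holds the summed token log-probability of one turn's
    # strategy-prefixed response under the trainable policy or a frozen
    # reference model. The loss pushes the policy to prefer the chosen
    # strategy-response over the rejected one at that turn.
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Dummy log-probabilities for a batch of four dialogue turns.
loss = turn_level_preference_loss(
    torch.tensor([-12.3, -8.1, -15.0, -9.7]),
    torch.tensor([-11.9, -9.4, -14.2, -10.1]),
    torch.tensor([-12.0, -8.5, -14.8, -9.9]),
    torch.tensor([-12.1, -9.0, -14.5, -10.0]),
)
print(round(loss.item(), 4))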