Zhang Zhiyi
2024
CMB: A Comprehensive Medical Benchmark in Chinese
Xidong Wang | Guiming Chen | Song Dingjie | Zhang Zhiyi | Zhihong Chen | Qingying Xiao | Junying Chen | Feng Jiang | Jianquan Li | Xiang Wan | Benyou Wang | Haizhou Li
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large Language Models (LLMs) offer the possibility of major breakthroughs in medicine, and a standardized medical benchmark is a fundamental cornerstone for measuring progress. However, medical environments in different regions have local characteristics, e.g., the ubiquity and significance of traditional Chinese medicine within China, so merely translating English-based medical evaluations may introduce contextual incongruities for a local region. To address this issue, we propose a localized medical benchmark called CMB, a Comprehensive Medical Benchmark in Chinese, designed and rooted entirely within the native Chinese linguistic and cultural framework. While traditional Chinese medicine is integral to this evaluation, it does not constitute its entirety. Using this benchmark, we have evaluated several prominent LLMs, including ChatGPT, GPT-4, dedicated Chinese LLMs, and LLMs specialized in the medical domain. We hope this benchmark provides first-hand experience with existing LLMs for medicine and facilitates the widespread adoption and enhancement of medical LLMs within China. Our data and code are publicly available at https://github.com/FreedomIntelligence/CMB.
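The abstract describes evaluating LLMs on CMB but does not detail the harness; the sketch below is a minimal, hypothetical illustration of multiple-choice accuracy scoring of the kind such a benchmark uses. The `Item` schema and the `answer()` stub are assumptions for illustration, not the official CMB format or evaluation code.

```python
# Hypothetical sketch (not the official CMB evaluation code): computing accuracy
# on multiple-choice medical items. Item fields and answer() are illustrative only.
from dataclasses import dataclass

@dataclass
class Item:
    question: str
    options: dict[str, str]   # e.g. {"A": "...", "B": "..."}
    answer: str               # gold option key, e.g. "B"

def answer(item: Item) -> str:
    """Placeholder for a real model call; here it simply picks the first option."""
    return sorted(item.options)[0]

def accuracy(items: list[Item]) -> float:
    correct = sum(answer(it) == it.answer for it in items)
    return correct / len(items) if items else 0.0

items = [
    Item("Which organ produces insulin?",
         {"A": "Liver", "B": "Pancreas", "C": "Kidney", "D": "Spleen"}, "B"),
]
print(f"accuracy: {accuracy(items):.2%}")
```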
2023
HuatuoGPT, Towards Taming Language Model to Be a Doctor
Hongbo Zhang | Junying Chen | Feng Jiang | Fei Yu | Zhihong Chen | Guiming Chen | Jianquan Li | Xiangbo Wu | Zhang Zhiyi | Qingying Xiao | Xiang Wan | Benyou Wang | Haizhou Li
Findings of the Association for Computational Linguistics: EMNLP 2023
In this paper, we present HuatuoGPT, a Large Language Model (LLM) for medical consultation. The core recipe of HuatuoGPT is to leverage both distilled data from **ChatGPT** and real-world data from **doctors** in the supervised fine-tuning stage. This is not only because purely using **ChatGPT**-distilled data might cause 'model collapse', but also because real-world data from **doctors** is complementary to **ChatGPT**-distilled data. The responses from ChatGPT are usually detailed, well-presented, fluent, and instruction-following, but ChatGPT cannot perform like a doctor in many respects, e.g., in interactive diagnosis. The extra doctors' data can therefore tame a distilled language model to behave like a doctor. To synergize the strengths of both data sources, we introduce RLMF (Reinforcement Learning from Mixed Feedback), where a reward model is trained to align the language model with the merits that both sources (ChatGPT and doctors) bring. Experimental results (in GPT-4 evaluation, human evaluation, and on medical benchmark datasets) demonstrate that HuatuoGPT achieves state-of-the-art results in medical consultation among open-source LLMs. It is worth noting that, by using additional real-world data and RLMF, the distilled language model (i.e., HuatuoGPT) outperforms its teacher model (i.e., ChatGPT) in most cases.
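The abstract only sketches RLMF at a high level: a reward model is trained on feedback that mixes doctor data and ChatGPT-distilled data, then used to align the fine-tuned model. The snippet below is a minimal, hypothetical PyTorch sketch of the standard pairwise (Bradley-Terry style) reward-model objective that such a scheme typically builds on; `TinyRewardModel` and the dummy preference batch are assumptions for illustration, not HuatuoGPT's actual implementation.

```python
# Hypothetical sketch (not the HuatuoGPT code): training a reward model with a
# pairwise ranking loss on mixed feedback, i.e. preference pairs whose "chosen"
# response may come either from doctors or from ChatGPT-distilled data.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Toy stand-in for an LLM backbone plus a scalar reward head."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.reward_head = nn.Linear(dim, 1)

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        h, _ = self.encoder(self.embed(token_ids))
        return self.reward_head(h[:, -1]).squeeze(-1)  # one scalar reward per sequence

def pairwise_loss(r_chosen, r_rejected):
    # Bradley-Terry style objective: the preferred response should score higher.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

model = TinyRewardModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Dummy preference batch; in RLMF the pairs would mix doctor-preferred and
# ChatGPT-style-preferred examples so the reward captures both merits.
chosen = torch.randint(0, 1000, (8, 32))
rejected = torch.randint(0, 1000, (8, 32))

loss = pairwise_loss(model(chosen), model(rejected))
loss.backward()
opt.step()
print(f"reward-model loss: {loss.item():.4f}")
```

The trained reward model would then score policy samples during an RL fine-tuning stage; that outer loop is omitted here.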
Co-authors
- Guiming Chen 2
- Zhihong Chen 2
- Qingying Xiao 2
- Junying Chen 2
- Feng Jiang (蒋峰) 2