Yifei Chen
2025
EMRs2CSP: Mining Clinical Status Pathway from Electronic Medical Records
Yifei Chen | Ruihui Hou | Jingping Liu | Tong Ruan
Findings of the Association for Computational Linguistics: ACL 2025
Many current studies focus on extracting tests or treatments when constructing clinical pathways, often neglecting the patient’s symptoms and diagnosis, leading to incomplete diagnostic and therapeutic logic. Therefore, this paper aims to extract clinical pathways from electronic medical records that encompass complete diagnostic and therapeutic logic, including temporal information, patient symptoms, diagnosis, and tests or treatments. To achieve this objective, we propose a novel clinical pathway representation: the clinical status pathway. We also design an LLM-based pipeline framework for extracting clinical status pathways from electronic medical records, whose core idea is to improve extraction accuracy by modeling the diagnostic and treatment process. In our experiments, we apply this framework to construct a comprehensive breast cancer-specific clinical status pathway and evaluate its performance on medical question-answering and decision-support tasks, demonstrating significant improvements over traditional clinical pathways. The code is publicly available at https://github.com/finnchen11/EMRs2CSP.
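The pipeline itself is specified in the paper and repository above. Purely as an illustration of the general idea (prompting an LLM to turn one EMR note into time-stamped clinical statuses), here is a minimal hypothetical sketch; the prompt wording, the `call_llm` stub, and the JSON keys (`time`, `symptoms`, `diagnosis`, `tests_or_treatments`) are assumptions for illustration, not the authors' implementation.

```python
import json

# Hypothetical stand-in for any chat-completion client; plug in your own.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("connect an LLM client here")

# Assumed prompt; the paper's actual prompts are in its repository.
PROMPT_TEMPLATE = """You are a clinical information extractor.
From the medical record below, output a JSON list of clinical statuses,
each an object with keys "time", "symptoms", "diagnosis",
and "tests_or_treatments".

Record:
{record}
"""

def extract_clinical_statuses(record: str) -> list[dict]:
    """One pipeline step: map a single EMR note to structured statuses."""
    raw = call_llm(PROMPT_TEMPLATE.format(record=record))
    return json.loads(raw)  # assumes the model returns valid JSON
```

Downstream, such per-note statuses would be merged across records to form a disease-specific pathway; the merging logic is beyond this sketch.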
2024
INTERS: Unlocking the Power of Large Language Models in Search with Instruction Tuning
Yutao Zhu | Peitian Zhang | Chenghao Zhang | Yifei Chen | Binyu Xie | Zheng Liu | Ji-Rong Wen | Zhicheng Dou
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) have demonstrated impressive capabilities in various natural language processing tasks. Despite this, their application to information retrieval (IR) tasks is still challenging due to the infrequent occurrence of many IR-specific concepts in natural language. While prompt-based methods can provide task descriptions to LLMs, they often fall short in facilitating a comprehensive understanding and execution of IR tasks, thereby limiting LLMs’ applicability. To address this gap, in this work, we explore the potential of instruction tuning to enhance LLMs’ proficiency in IR tasks. We introduce a novel instruction tuning dataset, INTERS, encompassing 20 tasks across three fundamental IR categories: query understanding, document understanding, and query-document relationship understanding. The data are derived from 43 distinct datasets with manually written templates. Our empirical results reveal that INTERS significantly boosts the performance of various publicly available LLMs, such as LLaMA, Mistral, and Falcon, in IR tasks. Furthermore, we conduct extensive experiments to analyze the effects of instruction design, template diversity, few-shot demonstrations, and the volume of instructions on performance. We make our dataset and the fine-tuned models publicly accessible at https://github.com/DaoD/INTERS.
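The released templates and data are in the repository above. As a hedged illustration only of the underlying recipe (casting an IR task into an instruction-tuning record via a written template), the field names and template text below are assumptions, not the actual INTERS schema:

```python
# Hypothetical example: a query-document relevance judgment rewritten as
# an instruction-tuning record. The real templates are at
# https://github.com/DaoD/INTERS.
TEMPLATE = (
    "Judge whether the document answers the query. "
    "Answer 'relevant' or 'irrelevant'.\n"
    "Query: {query}\nDocument: {document}"
)

def to_instruction_example(query: str, document: str, relevant: bool) -> dict:
    return {
        "instruction": TEMPLATE.format(query=query, document=document),
        "output": "relevant" if relevant else "irrelevant",
    }

print(to_instruction_example(
    "symptoms of anemia", "Fatigue and pallor are common signs...", True))
```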