Jiali Cheng
2024
MedDec: A Dataset for Extracting Medical Decisions from Discharge Summaries
Mohamed Elgaar | Jiali Cheng | Nidhi Vakil | Hadi Amiri | Leo Anthony Celi
Findings of the Association for Computational Linguistics: ACL 2024
Medical decisions directly impact individuals’ health and well-being. Extracting decision spans from clinical notes plays a crucial role in understanding medical decision-making processes. In this paper, we develop a new dataset called “MedDec,” which contains clinical notes of eleven different phenotypes (diseases) annotated with ten types of medical decisions. We introduce the task of medical decision extraction, which aims to jointly extract and classify different types of medical decisions within clinical notes. We provide a comprehensive analysis of the dataset, develop a span detection model as a baseline for this task, evaluate recent span detection approaches, and employ several metrics to measure the complexity of data samples. Our findings shed light on the complexities inherent in clinical decision extraction and enable future work in this area of research. The dataset and code are available at https://github.com/CLU-UML/MedDec.
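The abstract frames medical decision extraction as jointly detecting spans and classifying them into ten decision types. As a rough illustration only (not the paper’s baseline), such a task is often cast as BIO-style token classification; the sketch below assumes a generic Hugging Face encoder, and the label names and example sentence are hypothetical.

```python
# Hypothetical sketch: decision-span extraction as BIO token classification.
# Label set, encoder, and example text are illustrative, not from the MedDec paper.
from transformers import AutoTokenizer, AutoModelForTokenClassification

decision_types = [f"DECISION_{i}" for i in range(1, 11)]            # 10 placeholder decision types
labels = ["O"] + [f"{p}-{t}" for t in decision_types for p in ("B", "I")]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

note = "Patient was started on warfarin and scheduled for follow-up imaging."
inputs = tokenizer(note, return_tensors="pt")
logits = model(**inputs).logits                                      # (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()
print([labels[i] for i in pred_ids])                                 # untrained head, so predictions are arbitrary
```

A trained model of this form yields one label per token, from which contiguous B/I runs are decoded into typed decision spans.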
2023
Exploring the Impact of Model Scaling on Parameter-Efficient Tuning
Yusheng Su | Chi-Min Chan | Jiali Cheng | Yujia Qin | Yankai Lin | Shengding Hu | Zonghan Yang | Ning Ding | Xingzhi Sun | Guotong Xie | Zhiyuan Liu | Maosong Sun
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Parameter-efficient tuning (PET) methods can effectively drive extremely large pre-trained language models (PLMs) by training only minimal parameters. Different PET methods utilize different manually designed tunable modules. In small PLMs, there are usually noticeable performance differences among PET methods. Nevertheless, as the model scale increases, the performance differences become marginal. Hence, we hypothesize that model scaling mitigates the impact of design differences on PET methods. To investigate this hypothesis, we introduce a more flexible PET method called Arbitrary PET (APET), whose tunable module may consist of any number of parameters distributed in arbitrary positions. We then use it to conduct experiments on 11 NLP tasks across 3 representative PLMs. Our investigations reveal that model scaling (1) mitigates the effects of the positions of tunable parameters on performance, and (2) enables tuning methods to achieve performance comparable to full-parameter fine-tuning by optimizing fewer tunable parameters. Intriguingly, we also observe that different tuning methods optimize a similar number of tunable parameters to exceed random-guess performance on different tasks. We discuss this phenomenon, together with the two aforementioned findings, from an optimization perspective to understand the underlying mechanisms. These conclusions enhance our understanding of the impact of model scaling on PET and assist in designing more effective and efficient PET methods for PLMs of different scales. The source code is available at https://github.com/yushengsu-thu/PET_Scaling.
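The core idea described here is tuning a small, arbitrarily positioned subset of parameters while the rest of the model stays fixed. The sketch below is not the APET implementation; it only illustrates that idea on a toy PyTorch model, with the 1% selection ratio and gradient-mask mechanism chosen purely for demonstration.

```python
# Hypothetical sketch: update only a random ~1% of parameter entries,
# chosen at arbitrary positions, while all other entries stay frozen.
# Toy model and masking scheme are illustrative, not the paper's APET method.
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

masks = {}
for name, p in model.named_parameters():
    mask = (torch.rand_like(p) < 0.01).float()   # fixed binary mask over entries
    masks[name] = mask
    # Zero gradients outside the mask so the optimizer only moves selected entries.
    p.register_hook(lambda grad, m=mask: grad * m)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()

tunable = sum(int(m.sum()) for m in masks.values())
total = sum(p.numel() for p in model.parameters())
print(f"trainable entries: {tunable}/{total}")
```

Varying the selection ratio and the positions of the unmasked entries is one simple way to probe how the number and placement of tunable parameters affect performance, which is the kind of question the paper studies at much larger scales.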
Co-authors
- Mohamed Elgaar 1
- Nidhi Vakil 1
- Hadi Amiri 1
- Leo Anthony Celi 1
- Yusheng Su 1