Parameter-Efficient Fine-Tuning of LLaMA for the Clinical Domain

Aryo Gema, Pasquale Minervini, Luke Daines, Tom Hope, Beatrice Alex


Abstract
Adapting pretrained language models to novel domains, such as clinical applications, traditionally involves retraining their entire set of parameters. Parameter-Efficient Fine-Tuning (PEFT) techniques significantly reduce these computational requirements by selectively fine-tuning small subsets of parameters. In this study, we propose a two-step PEFT framework and evaluate it in the clinical domain. Our approach combines a specialised PEFT adapter layer designed for clinical domain adaptation with another adapter specialised for downstream tasks. We evaluate the framework on multiple clinical outcome prediction datasets, comparing it to clinically trained language models. Our framework achieves a higher AUROC score, averaged across all clinical downstream tasks, than clinical language models. In particular, we observe large improvements of 4-5% AUROC in large-scale multilabel classification tasks, such as diagnoses and procedures classification. To our knowledge, this study is the first to provide an extensive empirical analysis of the interplay between PEFT techniques and domain adaptation in the important real-world domain of clinical applications.
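The abstract describes a two-step adapter setup: one PEFT adapter trained for clinical domain adaptation, followed by a second adapter trained for each downstream task. The sketch below illustrates one plausible way to wire this up with LoRA adapters via the Hugging Face `peft` library; the base checkpoint, hyperparameters, adapter type, and training objectives are illustrative assumptions, not the authors' exact configuration (the paper's downstream tasks are multilabel classification, whereas this sketch keeps a causal LM head for brevity).

```python
# Minimal two-step PEFT sketch (assumed setup, not the paper's implementation).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, PeftModel

base_model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Step 1: clinical domain-adaptation adapter, trained with a language-modelling
# objective on clinical notes (training loop omitted).
domain_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                           target_modules=["q_proj", "v_proj"],
                           task_type="CAUSAL_LM")
domain_model = get_peft_model(model, domain_config)
# ... train domain_model on clinical text ...
domain_model.save_pretrained("clinical-domain-adapter")

# Step 2: reload the base model, attach the trained domain adapter, and fold it
# into the weights; then add a second, task-specific adapter and fine-tune it
# on the downstream clinical dataset (e.g. outcome prediction).
base = AutoModelForCausalLM.from_pretrained(base_model_name)
base = PeftModel.from_pretrained(base, "clinical-domain-adapter")
base = base.merge_and_unload()  # merge domain adapter into the base weights

task_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
task_model = get_peft_model(base, task_config)
# ... fine-tune task_model on the downstream task ...
```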
Anthology ID: 2024.clinicalnlp-1.9
Volume: Proceedings of the 6th Clinical Natural Language Processing Workshop
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Tristan Naumann, Asma Ben Abacha, Steven Bethard, Kirk Roberts, Danielle Bitterman
Venues: ClinicalNLP | WS
Publisher: Association for Computational Linguistics
Pages: 91–104
URL: https://aclanthology.org/2024.clinicalnlp-1.9
Cite (ACL): Aryo Gema, Pasquale Minervini, Luke Daines, Tom Hope, and Beatrice Alex. 2024. Parameter-Efficient Fine-Tuning of LLaMA for the Clinical Domain. In Proceedings of the 6th Clinical Natural Language Processing Workshop, pages 91–104, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Parameter-Efficient Fine-Tuning of LLaMA for the Clinical Domain (Gema et al., ClinicalNLP-WS 2024)
PDF: https://preview.aclanthology.org/jeptaln-2024-ingestion/2024.clinicalnlp-1.9.pdf