Saama Technologies at BioLaySumm: Abstract based fine-tuned models with LoRA

Hwanmun Kim, Kamal raj Kanakarajan, Malaikannan Sankarasubbu


Abstract
Lay summarization of biomedical research articles is a challenging problem because of their technical terminology and background-knowledge requirements, despite the potential benefits of these articles to the public. We worked on this problem as participants in BioLaySumm 2024. We experimented with various fine-tuning approaches to generate better lay summaries for biomedical research articles. After several experiments, we built a LoRA model with unsupervised fine-tuning based on the abstracts of the given articles, followed by a post-processing unit that removes repeated sentences. Our model ranked 3rd overall on the BioLaySumm 2024 leaderboard. We analyze the different approaches we experimented with and suggest several ideas to improve our model further.
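The abstract mentions a post-processing unit that removes repeated sentences from generated summaries but does not describe its implementation. A minimal sketch of one way such deduplication could work, assuming naive regex sentence splitting and case-insensitive matching (both are illustrative choices, not details from the paper):

```python
import re

def remove_repeated_sentences(summary: str) -> str:
    """Drop sentences that already appeared earlier in the summary.

    Sentence segmentation here is a naive regex split on
    sentence-final punctuation; the paper does not specify how
    its post-processing unit segments or compares sentences.
    """
    sentences = re.split(r"(?<=[.!?])\s+", summary.strip())
    seen = set()
    kept = []
    for sentence in sentences:
        key = sentence.lower().strip()  # normalize for comparison
        if key and key not in seen:
            seen.add(key)
            kept.append(sentence)
    return " ".join(kept)

text = ("Gene X regulates growth. The study used mice. "
        "Gene X regulates growth. Results were significant.")
print(remove_repeated_sentences(text))
# → Gene X regulates growth. The study used mice. Results were significant.
```

Repetition is a common failure mode of fine-tuned generative summarizers, so a lightweight exact-match filter like this can be applied after decoding without touching the model itself.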
Anthology ID:
2024.bionlp-1.72
Volume:
Proceedings of the 23rd Workshop on Biomedical Natural Language Processing
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Dina Demner-Fushman, Sophia Ananiadou, Makoto Miwa, Kirk Roberts, Junichi Tsujii
Venues:
BioNLP | WS
SIG:
SIGBIOMED
Publisher:
Association for Computational Linguistics
Pages:
786–792
URL:
https://aclanthology.org/2024.bionlp-1.72
Cite (ACL):
Hwanmun Kim, Kamal raj Kanakarajan, and Malaikannan Sankarasubbu. 2024. Saama Technologies at BioLaySumm: Abstract based fine-tuned models with LoRA. In Proceedings of the 23rd Workshop on Biomedical Natural Language Processing, pages 786–792, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Saama Technologies at BioLaySumm: Abstract based fine-tuned models with LoRA (Kim et al., BioNLP-WS 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2024.bionlp-1.72.pdf