KHU_LDI at BioLaySumm2025: Fine-tuning and Refinement for Lay Radiology Report Generation

Nur Alya Dania Binti Moriazi, Mujeen Sung


Abstract
Though access to one’s own radiology reports has improved over the years, the complex medical terminology used in these reports makes them difficult to understand. To tackle this issue, we explored two approaches: supervised fine-tuning of open-source large language models using QLoRA, and refinement, in which a generated output is improved using feedback produced by a feedback model. Although the fine-tuned model outperformed refinement on the test data, refinement achieved strong results on the validation set, demonstrating its potential for generating lay radiology reports. Our submission achieved 2nd place in the open track of Subtask 2.1 of the BioLaySumm 2025 shared task.
Anthology ID:
2025.bionlp-share.31
Volume:
Proceedings of the 24th Workshop on Biomedical Language Processing (Shared Tasks)
Month:
August
Year:
2025
Address:
Vienna, Austria
Editors:
Sarvesh Soni, Dina Demner-Fushman
Venues:
BioNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
256–268
URL:
https://preview.aclanthology.org/display_plenaries/2025.bionlp-share.31/
Cite (ACL):
Nur Alya Dania Binti Moriazi and Mujeen Sung. 2025. KHU_LDI at BioLaySumm2025: Fine-tuning and Refinement for Lay Radiology Report Generation. In Proceedings of the 24th Workshop on Biomedical Language Processing (Shared Tasks), pages 256–268, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
KHU_LDI at BioLaySumm2025: Fine-tuning and Refinement for Lay Radiology Report Generation (Binti Moriazi & Sung, BioNLP 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.bionlp-share.31.pdf