SUWMIT at BioLaySumm2025: Instruction-based Summarization with Contrastive Decoding

Priyam Basu, Jose Cols, Daniel Jarvis, Yongsin Park, Daniel Rodabaugh


Abstract
In this paper, we present our team’s approach to subtask 1.1 of the BioLaySumm 2025 shared task, which entails the automated generation of lay summaries from biomedical articles. To this end, we experiment with a variety of methods for text preprocessing, extractive summarization, model fine-tuning, and abstractive summarization. Our final results are generated with a fine-tuned Llama 3.1 Instruct (8B) model, which notably achieved top scores on two of four relevance metrics, as well as the highest overall ranking among this year’s participating teams on the plain lay summarization subtask.
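For readers unfamiliar with the decoding strategy named in the title, the sketch below illustrates the standard expert–amateur formulation of contrastive decoding (Li et al., 2023): next-token scores are the expert's log-probabilities minus a weaker model's, restricted to tokens the expert itself finds plausible. This is a minimal, greedy illustration of the general technique, not the paper's exact configuration; the model identifiers, the choice of amateur model, and the hyperparameters (alpha, generation length) are assumptions for demonstration only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model choices: the shared-task system fine-tunes Llama 3.1
# Instruct (8B) as its summarizer; the smaller "amateur" model below is an
# assumed stand-in. Both models must share the same tokenizer/vocabulary.
EXPERT_ID = "meta-llama/Llama-3.1-8B-Instruct"
AMATEUR_ID = "meta-llama/Llama-3.2-1B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(EXPERT_ID)
expert = AutoModelForCausalLM.from_pretrained(EXPERT_ID, torch_dtype=torch.bfloat16)
amateur = AutoModelForCausalLM.from_pretrained(AMATEUR_ID, torch_dtype=torch.bfloat16)

@torch.no_grad()
def contrastive_decode(prompt: str, max_new_tokens: int = 256, alpha: float = 0.1) -> str:
    """Greedy contrastive decoding: rank tokens by expert minus amateur
    log-probability, over the expert's plausible tokens (Li et al., 2023)."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        expert_logp = expert(ids).logits[:, -1, :].log_softmax(dim=-1)
        amateur_logp = amateur(ids).logits[:, -1, :].log_softmax(dim=-1)
        # Plausibility constraint: keep only tokens whose expert probability
        # is within a factor of alpha of the expert's top token.
        cutoff = expert_logp.max(dim=-1, keepdim=True).values + torch.log(torch.tensor(alpha))
        scores = expert_logp - amateur_logp
        scores[expert_logp < cutoff] = float("-inf")
        next_id = scores.argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```

Note that the per-token subtraction is only meaningful when expert and amateur share a vocabulary, which is why the amateur is assumed here to come from the same model family.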
Anthology ID: 2025.bionlp-share.29
Volume: BioNLP 2025 Shared Tasks
Month: August
Year: 2025
Address: Vienna, Austria
Editors: Sarvesh Soni, Dina Demner-Fushman
Venues: BioNLP | WS
Publisher: Association for Computational Linguistics
Pages: 240–248
URL: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.bionlp-share.29/
Cite (ACL): Priyam Basu, Jose Cols, Daniel Jarvis, Yongsin Park, and Daniel Rodabaugh. 2025. SUWMIT at BioLaySumm2025: Instruction-based Summarization with Contrastive Decoding. In BioNLP 2025 Shared Tasks, pages 240–248, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): SUWMIT at BioLaySumm2025: Instruction-based Summarization with Contrastive Decoding (Basu et al., BioNLP 2025)
PDF: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.bionlp-share.29.pdf