Abstract
This paper presents our system for the Radiology Report Summarization shared task (Task 1B) of the 22nd BioNLP Workshop 2023. Inspired by the BioBART work, we continued the pre-training of a general-domain BART model on biomedical data to adapt it to this specific domain. During this pre-training phase, several pre-training tasks are aggregated to inject linguistic knowledge and to increase the abstractiveness of the generated summaries. We report the results of our models and additionally analyze the lengths of the generated summaries, which yielded useful insights.
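As a rough illustration of the approach the abstract describes, the sketch below continues the pre-training of a general-domain BART checkpoint on biomedical text with a BART-style text-infilling (denoising) objective. It is a minimal sketch, not the authors' implementation: the `facebook/bart-base` checkpoint, the single-token masking (BART's original text infilling masks Poisson-length spans), the 30% masking rate, and the one-sentence placeholder corpus are all illustrative assumptions, and the paper aggregates several pre-training tasks rather than this single objective.

```python
# Minimal sketch (not the authors' code) of continued pre-training of a
# general-domain BART checkpoint on biomedical text with a BART-style
# text-infilling (denoising) objective. Checkpoint name, masking scheme,
# masking rate, and corpus are illustrative assumptions.
import torch
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

def corrupt(text: str, mask_prob: float = 0.3):
    """Return (corrupted input_ids, original input_ids as labels).

    Real BART text infilling replaces Poisson-length spans with a single
    <mask> token; masking individual tokens keeps this sketch short.
    """
    ids = tokenizer(text, truncation=True, max_length=512,
                    return_tensors="pt").input_ids[0]
    corrupted = ids.clone()
    # Never mask special tokens such as <s> and </s>.
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(ids.tolist(),
                                          already_has_special_tokens=True),
        dtype=torch.bool,
    )
    mask = (torch.rand(ids.shape) < mask_prob) & ~special
    corrupted[mask] = tokenizer.mask_token_id
    return corrupted.unsqueeze(0), ids.unsqueeze(0)

# Placeholder biomedical corpus; the shared task uses radiology reports.
corpus = ["Chest radiograph shows no focal consolidation or pleural effusion."]

model.train()
for text in corpus:
    input_ids, labels = corrupt(text)
    # The model learns to reconstruct the uncorrupted report text.
    loss = model(input_ids=input_ids, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

A real run would batch and shuffle a full radiology-report corpus, train for multiple epochs, and combine this objective with the other aggregated pre-training tasks before fine-tuning on the shared task's findings-to-impression summarization pairs.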
- Anthology ID: 2023.bionlp-1.52
- Volume: The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks
- Month: July
- Year: 2023
- Address: Toronto, Canada
- Editors: Dina Demner-Fushman, Sophia Ananiadou, Kevin Cohen
- Venue: BioNLP
- Publisher: Association for Computational Linguistics
- Pages: 524–529
- URL: https://aclanthology.org/2023.bionlp-1.52
- DOI: 10.18653/v1/2023.bionlp-1.52
- Cite (ACL): Vicent Ahuir Esteve, Encarna Segarra, and Lluís Hurtado. 2023. ELiRF-VRAIN at BioNLP Task 1B: Radiology Report Summarization. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 524–529, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal): ELiRF-VRAIN at BioNLP Task 1B: Radiology Report Summarization (Ahuir Esteve et al., BioNLP 2023)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2023.bionlp-1.52.pdf