@inproceedings{malik-etal-2024-hgp,
    title = "{HGP}-{NLP} at {B}io{L}ay{S}umm: Leveraging {L}o{RA} for Lay Summarization of Biomedical Research Articles using {S}eq2{S}eq Transformers",
    author = "Malik, Hemang  and
      Pradeep, Gaurav  and
      Seth, Pratinav",
    editor = "Demner-Fushman, Dina  and
      Ananiadou, Sophia  and
      Miwa, Makoto  and
      Roberts, Kirk  and
      Tsujii, Junichi",
    booktitle = "Proceedings of the 23rd Workshop on Biomedical Natural Language Processing",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2024.bionlp-1.78/",
    doi = "10.18653/v1/2024.bionlp-1.78",
    pages = "831--836",
    abstract = "Lay summarization aims to generate summaries of technical articles for non-experts, enabling easy comprehension for a general audience. The technical language used in research often hinders effective communication of scientific knowledge, making it difficult for non-experts to understand. Automatic lay summarization can enhance access to scientific literature, promoting interdisciplinary knowledge sharing and public understanding. This has become especially important for biomedical articles, given the current global need for clear medical information. Large Language Models (LLMs), with their remarkable language understanding capabilities, are ideal for abstractive summarization, helping to make complex information accessible to the public. This paper details our submissions to the BioLaySumm 2024 Shared Task: Lay Summarization of Biomedical Research Articles. We fine-tune and evaluate sequence-to-sequence models like T5 across various training dataset settings and optimization methods such as LoRA for lay summarization. Our submission achieved the 53rd position overall."
}