Abdelmalak at PerAnsSumm 2025: Leveraging a Domain-Specific BERT and LLaMA for Perspective-Aware Healthcare Answer Summarization

Abanoub Abdelmalak


Abstract
The PerAnsSumm Shared Task (CL4Health@NAACL 2025) aims to enhance healthcare community question-answering (CQA) by summarizing diverse user perspectives. It comprises two tasks: identifying and classifying perspective-specific spans (Task A) and generating structured, perspective-specific summaries from question-answer threads (Task B). The dataset used is the PUMA dataset. For Task A, a COVID-Twitter-BERT model, pre-trained on COVID-related Twitter text, was employed to strengthen the model's grasp of the relevant vocabulary and context. For Task B, LLaMA was used in a prompt-based fashion. The proposed approach achieved 9th place in Task A and 16th place overall, with a best proportional classification F1-score of 0.74.
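Task A is a span identification and classification problem, commonly realized as BIO token tagging followed by a decoding step that merges tag sequences into labeled spans. A minimal sketch of that decoding step, assuming BIO output from a token classifier; the perspective label names here are illustrative, not the dataset's exact label set:

```python
def bio_to_spans(tokens, tags):
    """Merge parallel BIO tags into (label, start, end, text) spans.

    tokens: list of word tokens; tags: BIO tags like "B-EXPERIENCE",
    "I-EXPERIENCE", "O", one per token.
    """
    spans = []
    start, label = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            # Close any open span, then start a new one.
            if label is not None:
                spans.append((label, start, i, " ".join(tokens[start:i])))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            continue  # extend the current span
        else:
            # "O" tag or label mismatch: close any open span.
            if label is not None:
                spans.append((label, start, i, " ".join(tokens[start:i])))
            start, label = None, None
    if label is not None:
        spans.append((label, start, len(tokens), " ".join(tokens[start:])))
    return spans
```

The greedy merge treats a stray `I-` tag with a mismatched label as span-closing, a common convention when decoding imperfect model output.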
Anthology ID:
2025.cl4health-1.39
Volume:
Proceedings of the Second Workshop on Patient-Oriented Language Processing (CL4Health)
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Sophia Ananiadou, Dina Demner-Fushman, Deepak Gupta, Paul Thompson
Venues:
CL4Health | WS
Publisher:
Association for Computational Linguistics
Pages:
428–436
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.cl4health-1.39/
Cite (ACL):
Abanoub Abdelmalak. 2025. Abdelmalak at PerAnsSumm 2025: Leveraging a Domain-Specific BERT and LLaMA for Perspective-Aware Healthcare Answer Summarization. In Proceedings of the Second Workshop on Patient-Oriented Language Processing (CL4Health), pages 428–436, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Abdelmalak at PerAnsSumm 2025: Leveraging a Domain-Specific BERT and LLaMA for Perspective-Aware Healthcare Answer Summarization (Abdelmalak, CL4Health 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.cl4health-1.39.pdf