LMU at PerAnsSumm 2025: LlaMA-in-the-loop at Perspective-Aware Healthcare Answer Summarization Task 2.2 Factuality

Tanalp Ağustoslu


Abstract
In this paper, we describe our submission to the shared task on Perspective-aware Healthcare Answer Summarization. Our system consists of two quantized models of the LlaMA family, applied in fine-tuning and few-shot settings. Additionally, we adopt the SumCoT prompting technique to improve the factual correctness of the generated summaries. We show that SumCoT yields more factually accurate summaries, although this improvement comes at the cost of lower scores on lexical overlap and semantic similarity metrics such as ROUGE and BERTScore. Our work highlights an important trade-off between factuality and surface-level similarity when evaluating summarization models.
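
The page itself contains no code; as a rough illustration of the SumCoT idea mentioned in the abstract, the sketch below shows two-stage prompting in Python, where the model first extracts key factual elements and then summarizes conditioned on them. The generate function is a hypothetical placeholder for any LLM backend (for example, a quantized LlaMA served locally) and is not taken from the paper.

# Minimal SumCoT-style sketch (assumption: two-stage, element-aware
# prompting). `generate` is a hypothetical stand-in for an LLM
# completion call; wire it to your own backend.
def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM backend here")

def sumcot_summarize(answers: str, perspective: str) -> str:
    # Stage 1: extract salient factual elements from the source answers.
    elements = generate(
        "List the key factual elements (conditions, treatments, outcomes, "
        f"cautions) in the following answers:\n{answers}"
    )
    # Stage 2: summarize, grounding the output in the extracted elements
    # so the summary stays anchored to stated facts.
    return generate(
        f"Answers:\n{answers}\n\nKey elements:\n{elements}\n\n"
        f"Write a concise summary of the {perspective} perspective, using "
        "only information supported by the key elements above."
    )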
Anthology ID: 2025.cl4health-1.34
Volume: Proceedings of the Second Workshop on Patient-Oriented Language Processing (CL4Health)
Month: May
Year: 2025
Address: Albuquerque, New Mexico
Editors: Sophia Ananiadou, Dina Demner-Fushman, Deepak Gupta, Paul Thompson
Venues: CL4Health | WS
Publisher: Association for Computational Linguistics
Pages: 380–388
URL: https://preview.aclanthology.org/fix-sig-urls/2025.cl4health-1.34/
Cite (ACL): Tanalp Ağustoslu. 2025. LMU at PerAnsSumm 2025: LlaMA-in-the-loop at Perspective-Aware Healthcare Answer Summarization Task 2.2 Factuality. In Proceedings of the Second Workshop on Patient-Oriented Language Processing (CL4Health), pages 380–388, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): LMU at PerAnsSumm 2025: LlaMA-in-the-loop at Perspective-Aware Healthcare Answer Summarization Task 2.2 Factuality (Ağustoslu, CL4Health 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.cl4health-1.34.pdf