Abstract
Abstractive summarization of medical dialogues presents a challenge for standard training approaches, given the paucity of suitable datasets. We explore the performance of state-of-the-art models with zero-shot and few-shot learning strategies and measure the impact of pretraining with general domain and dialogue-specific text on the summarization performance.
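The few-shot setup the abstract describes can be illustrated with a minimal sketch: fine-tuning a pretrained sequence-to-sequence summarizer on a handful of dialogue-summary pairs, then generating a summary for an unseen dialogue. This assumes the Hugging Face transformers library; the checkpoint name, hyperparameters, and example data below are illustrative assumptions, not the paper's actual configuration.

```python
# Hedged sketch: few-shot fine-tuning of a pretrained summarization model
# on a small set of (dialogue, summary) pairs. Checkpoint, learning rate,
# and data are placeholders, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/bart-large-cnn"  # assumed general-domain checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical few-shot training set.
few_shot = [
    ("Doctor: How long have you had the cough? Patient: About two weeks.",
     "Patient reports a two-week history of cough."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
for _ in range(3):  # a few passes over the few-shot examples
    for dialogue, summary in few_shot:
        inputs = tokenizer(dialogue, return_tensors="pt",
                           truncation=True, max_length=1024)
        labels = tokenizer(text_target=summary, return_tensors="pt",
                           truncation=True, max_length=128).input_ids
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Inference on a new dialogue (zero-shot if the loop above is skipped).
model.eval()
test_dialogue = "Doctor: Any fever? Patient: Yes, since yesterday."
batch = tokenizer(test_dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**batch, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```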
- Anthology ID: 2022.naacl-srw.32
- Volume: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop
- Month: July
- Year: 2022
- Address: Hybrid: Seattle, Washington + Online
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 254–266
- URL: https://aclanthology.org/2022.naacl-srw.32
- DOI: 10.18653/v1/2022.naacl-srw.32
- Cite (ACL): David Fraile Navarro, Mark Dras, and Shlomo Berkovsky. 2022. Few-shot fine-tuning SOTA summarization models for medical dialogues. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop, pages 254–266, Hybrid: Seattle, Washington + Online. Association for Computational Linguistics.
- Cite (Informal): Few-shot fine-tuning SOTA summarization models for medical dialogues (Navarro et al., NAACL 2022)
- PDF: https://aclanthology.org/2022.naacl-srw.32.pdf