Abstract
Large Language Models (LLMs) have shown significant performance in numerous NLP tasks, including summarization and controlled text generation. A notable capability of LLMs is in-context learning (ICL), where the model learns new tasks using input-output pairs in the prompt without any parameter update. However, the performance of LLMs in the context of few-shot abstractive dialogue summarization remains underexplored. This study evaluates various state-of-the-art LLMs on the SAMSum dataset within a few-shot framework. We assess these models in both controlled (entity control, length control, and person-focused planning) and uncontrolled settings, establishing a comprehensive benchmark in few-shot dialogue summarization. Our findings provide insights into summary quality and model controllability, offering a crucial reference for future research in dialogue summarization.
- Anthology ID: 2023.newsum-1.6
- Volume: Proceedings of the 4th New Frontiers in Summarization Workshop
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Yue Dong, Wen Xiao, Lu Wang, Fei Liu, Giuseppe Carenini
- Venue: NewSum
- Publisher: Association for Computational Linguistics
- Pages: 56–67
- URL: https://preview.aclanthology.org/add_missing_videos/2023.newsum-1.6/
- DOI: 10.18653/v1/2023.newsum-1.6
- Cite (ACL): Yuting Tang, Ratish Puduppully, Zhengyuan Liu, and Nancy Chen. 2023. In-context Learning of Large Language Models for Controlled Dialogue Summarization: A Holistic Benchmark and Empirical Analysis. In Proceedings of the 4th New Frontiers in Summarization Workshop, pages 56–67, Singapore. Association for Computational Linguistics.
- Cite (Informal): In-context Learning of Large Language Models for Controlled Dialogue Summarization: A Holistic Benchmark and Empirical Analysis (Tang et al., NewSum 2023)
- PDF: https://preview.aclanthology.org/add_missing_videos/2023.newsum-1.6.pdf
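The abstract's description of ICL, learning from input-output pairs supplied in the prompt with no parameter updates, can be made concrete with a small prompt-construction sketch. The dialogues, the prompt template, the control phrasing, and the `build_prompt` helper below are illustrative assumptions, not the prompts or code used in the paper; in the actual benchmark the demonstrations would come from the SAMSum training split.

```python
# Minimal sketch of few-shot in-context learning (ICL) for controlled dialogue
# summarization. The example dialogues, prompt wording, and function names are
# hypothetical placeholders, not the authors' exact setup.

# In-context demonstrations: (dialogue, reference summary) pairs.
DEMONSTRATIONS = [
    (
        "Amanda: Are we still on for lunch?\nTom: Yes, 12:30 at the cafe.",
        "Amanda and Tom will meet for lunch at the cafe at 12:30.",
    ),
    (
        "Lena: Did you send the report?\nMark: Not yet, I'll finish it tonight.",
        "Mark will finish and send the report tonight.",
    ),
]


def build_prompt(dialogue: str, control: str | None = None) -> str:
    """Concatenate demonstrations and the test dialogue into one prompt.

    `control` optionally adds a constraint such as a length limit or a focus
    entity, mirroring the controlled settings evaluated in the paper.
    """
    parts = []
    for demo_dialogue, demo_summary in DEMONSTRATIONS:
        parts.append(f"Dialogue:\n{demo_dialogue}\nSummary: {demo_summary}\n")
    instruction = f"Summary ({control}):" if control else "Summary:"
    parts.append(f"Dialogue:\n{dialogue}\n{instruction}")
    return "\n".join(parts)


if __name__ == "__main__":
    test_dialogue = "Ivy: Can you pick up Sam from school?\nNoah: Sure, I'll be there at 3."
    # Length-controlled variant: request a summary of at most 15 words.
    prompt = build_prompt(test_dialogue, control="at most 15 words")
    print(prompt)
    # The prompt would then be sent to an LLM; the model conditions only on
    # the in-context examples and no parameters are updated.
```

The design point illustrated here is that controllability is expressed purely through the prompt (demonstrations plus an instruction), which is what makes the few-shot comparison across models in the paper possible without any fine-tuning.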