Zero-Shot Strategies for Length-Controllable Summarization

Fabian Retkowski, Alexander Waibel

Abstract
Large language models (LLMs) struggle with precise length control, particularly in zero-shot settings. We conduct a comprehensive study evaluating LLMs’ length control capabilities across multiple measures and propose practical methods to improve controllability. Our experiments with LLaMA 3 reveal stark differences in length adherence across measures and highlight inherent biases of the model. To address these challenges, we introduce a set of methods: length approximation, target adjustment, sample filtering, and automated revisions. By combining these methods, we demonstrate substantial improvements in length compliance while maintaining or enhancing summary quality, providing highly effective zero-shot strategies for precise length control without the need for model fine-tuning or architectural changes. With our work, we not only advance our understanding of LLM behavior in controlled text generation but also pave the way for more reliable and adaptable summarization systems in real-world applications.
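The abstract names the four method families without detail; the sketch below shows one plausible way they might compose in practice. It is a minimal illustration under stated assumptions, not the paper's implementation: llm_generate is a hypothetical stand-in for any LLM sampling call, and the prompts, the 0.9 adjustment factor, the 20 words-per-sentence constant, and the tolerance band are all illustrative values, not ones taken from the paper.

from typing import Callable, List

def word_count(text: str) -> int:
    return len(text.split())

def approx_words_from_sentences(n_sentences: int, avg_words_per_sentence: int = 20) -> int:
    # Length approximation (illustrative): map a sentence-level target onto a
    # word-level one via an assumed average sentence length.
    return n_sentences * avg_words_per_sentence

def length_controlled_summary(
    document: str,
    target_words: int,
    llm_generate: Callable[[str], str],  # hypothetical LLM call returning one sampled completion
    num_samples: int = 8,
    adjustment: float = 0.9,   # assumed factor countering an over-generation bias
    max_revisions: int = 2,
) -> str:
    # Target adjustment: shift the requested length to offset systematic bias.
    adjusted_target = round(target_words * adjustment)
    prompt = f"Summarize the following text in about {adjusted_target} words.\n\n{document}"

    # Sample filtering: draw several candidates, keep the one closest to the target.
    candidates: List[str] = [llm_generate(prompt) for _ in range(num_samples)]
    best = min(candidates, key=lambda s: abs(word_count(s) - target_words))

    # Automated revision: ask the model to expand or shorten while still off target.
    for _ in range(max_revisions):
        diff = word_count(best) - target_words
        if abs(diff) <= max(2, target_words // 20):  # assumed tolerance band
            break
        direction = "shorten" if diff > 0 else "expand"
        best = llm_generate(
            f"Please {direction} the following summary to about {target_words} "
            f"words, preserving its content:\n\n{best}"
        )
    return best

In this composition, sample filtering absorbs run-to-run variance while the revision loop corrects any residual bias the adjustment factor missed; the paper's actual procedures and parameter choices may differ.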
Anthology ID:
2025.findings-naacl.34
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
551–572
URL:
https://preview.aclanthology.org/landing_page/2025.findings-naacl.34/
Cite (ACL):
Fabian Retkowski and Alexander Waibel. 2025. Zero-Shot Strategies for Length-Controllable Summarization. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 551–572, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Zero-Shot Strategies for Length-Controllable Summarization (Retkowski & Waibel, Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-naacl.34.pdf