QA-prompting: Improving Summarization with Large Language Models using Question-Answering

Neelabh Sinha


Abstract
Language Models (LMs) have revolutionized natural language processing, enabling high-quality text generation through prompting and in-context learning. However, models often struggle with long-context summarization due to positional biases, leading to suboptimal extraction of critical information. Existing remedies rely on fine-tuning, pipelining, or other complex techniques, each of which introduces its own challenges. To address these challenges, we propose QA-prompting - a simple prompting method for summarization that uses question-answering as an intermediate step prior to summary generation. Our method extracts key information and enriches the textual context to mitigate positional biases, improving summarization in a single LM call per task without requiring fine-tuning or pipelining. Experiments on multiple datasets from different domains, using ten state-of-the-art pre-trained models, demonstrate that QA-prompting outperforms baseline and other state-of-the-art methods, achieving up to a 29% improvement in ROUGE scores. This provides an effective and scalable solution for summarization and highlights the importance of domain-specific question selection for optimal performance.
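To make the idea in the abstract concrete, below is a minimal sketch of QA-prompting: a single prompt that asks the model to answer domain-relevant questions before writing the summary, all in one LM call. The question list, the prompt wording, and the `call_lm` placeholder are illustrative assumptions, not the paper's exact prompt or question-selection procedure.

```python
# Hypothetical domain-specific questions (the abstract stresses that
# question selection should match the document's domain; these are
# example questions for news-style text).
NEWS_QUESTIONS = [
    "Who are the main people or organizations involved?",
    "What happened, and when and where did it happen?",
    "Why is this event significant?",
]


def build_qa_prompt(document: str, questions: list[str]) -> str:
    """Compose one prompt: answer the questions first, then summarize."""
    numbered = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(questions))
    return (
        f"Read the following document.\n\n{document}\n\n"
        f"First, answer these questions based on the document:\n{numbered}\n\n"
        "Then, using your answers as key information, write a concise "
        "summary of the document."
    )


def call_lm(prompt: str) -> str:
    """Placeholder for one call to a pre-trained LM (e.g., an API client
    or a local model); swap in your own generation function here."""
    raise NotImplementedError


# Usage: one LM call per document, no fine-tuning or pipelining.
# output = call_lm(build_qa_prompt(document_text, NEWS_QUESTIONS))
```

Because the question-answering and the summary are produced in the same call, the intermediate answers enrich the context the model conditions on, which is how the method aims to counter positional biases in long inputs.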
Anthology ID:
2025.newsum-main.14
Volume:
Proceedings of The 5th New Frontiers in Summarization Workshop
Month:
November
Year:
2025
Address:
Hybrid
Editors:
Yue Dong, Wen Xiao, Haopeng Zhang, Rui Zhang, Ori Ernst, Lu Wang, Fei Liu
Venues:
NewSum | WS
Publisher:
Association for Computational Linguistics
Pages:
199–212
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.newsum-main.14/
Cite (ACL):
Neelabh Sinha. 2025. QA-prompting: Improving Summarization with Large Language Models using Question-Answering. In Proceedings of The 5th New Frontiers in Summarization Workshop, pages 199–212, Hybrid. Association for Computational Linguistics.
Cite (Informal):
QA-prompting: Improving Summarization with Large Language Models using Question-Answering (Sinha, NewSum 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.newsum-main.14.pdf