Beyond Paraphrasing: Analyzing Summarization Abstractiveness and Reasoning

Nathan Zeweniuk, Ori Ernst, Jackie CK Cheung


Abstract
While there have been many studies analyzing the ability of LLMs to solve problems through reasoning, their application of reasoning in summarization remains largely unexamined. This study explores whether reasoning is essential to summarization by investigating three questions: (1) Do humans frequently use reasoning to generate new summary content? (2) Do summarization models exhibit the same reasoning patterns as humans? (3) Should summarization models integrate more complex reasoning abilities? Our findings reveal that while human summaries often contain reasoning-based information, system-generated summaries rarely contain this same information. This suggests that models struggle to effectively apply reasoning, even when it could improve summary quality. We advocate for the development of models that incorporate deeper reasoning and abstractiveness, and we release our annotated data to support future research.
Anthology ID: 2025.newsum-main.4
Volume: Proceedings of The 5th New Frontiers in Summarization Workshop
Month: November
Year: 2025
Address: Hybrid
Editors: Yue Dong, Wen Xiao, Haopeng Zhang, Rui Zhang, Ori Ernst, Lu Wang, Fei Liu
Venues: NewSum | WS
Publisher: Association for Computational Linguistics
Pages: 48–58
URL: https://preview.aclanthology.org/ingest-emnlp/2025.newsum-main.4/
Cite (ACL): Nathan Zeweniuk, Ori Ernst, and Jackie CK Cheung. 2025. Beyond Paraphrasing: Analyzing Summarization Abstractiveness and Reasoning. In Proceedings of The 5th New Frontiers in Summarization Workshop, pages 48–58, Hybrid. Association for Computational Linguistics.
Cite (Informal): Beyond Paraphrasing: Analyzing Summarization Abstractiveness and Reasoning (Zeweniuk et al., NewSum 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.newsum-main.4.pdf