StrucSum: Graph-Structured Reasoning for Long Document Extractive Summarization with LLMs

Haohan Yuan, Sukhwa Hong, Haopeng Zhang


Abstract
Large language models (LLMs) have shown strong performance in zero-shot summarization, but they often struggle to model document structure and identify salient information in long texts. In this work, we introduce StrucSum, a training-free prompting framework that enhances LLM reasoning through sentence-level graph structures. StrucSum injects structural signals into prompts via three targeted strategies: Neighbor-Aware Prompting (NAP) for local context, Centrality-Aware Prompting (CAP) for importance estimation, and Centrality-Guided Masking (CGM) for efficient input reduction. Experiments on ArXiv, PubMed, and Multi-News demonstrate that StrucSum consistently improves both summary quality and factual consistency over unsupervised baselines and vanilla prompting. On ArXiv in particular, it raises FactCC and SummaC by 19.2 and 8.0 percentage points, respectively, indicating stronger alignment between summaries and source content. Although the ablation study shows that combining multiple strategies does not yield clear additional gains, structure-aware prompting with graph-based information remains a promising and underexplored direction for zero-shot extractive summarization with LLMs.
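The abstract's Centrality-Guided Masking idea can be illustrated with a minimal sketch: build a sentence graph from pairwise similarity, score sentences by degree centrality, and drop low-centrality sentences before prompting. This is a hypothetical stand-in, not the paper's implementation; the similarity measure (bag-of-words cosine), the `threshold`, and the `keep_ratio` parameter are all assumptions for illustration.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centrality_mask(sentences, threshold=0.1, keep_ratio=0.5):
    """Sketch of centrality-guided masking: connect sentence pairs whose
    similarity exceeds `threshold`, score each sentence by its degree in
    the resulting graph, and keep only the top `keep_ratio` fraction."""
    vecs = [Counter(s.lower().split()) for s in sentences]
    n = len(sentences)
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(vecs[i], vecs[j]) > threshold:
                degree[i] += 1
                degree[j] += 1
    k = max(1, int(n * keep_ratio))
    # Rank by centrality, then restore original document order.
    keep = sorted(sorted(range(n), key=lambda i: -degree[i])[:k])
    return [sentences[i] for i in keep], degree
```

In a prompting pipeline, the surviving sentences (with their centrality scores, as in CAP) would replace the full document in the LLM input, shrinking the context while retaining structurally central content.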
Anthology ID:
2026.findings-eacl.192
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3708–3721
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.192/
Cite (ACL):
Haohan Yuan, Sukhwa Hong, and Haopeng Zhang. 2026. StrucSum: Graph-Structured Reasoning for Long Document Extractive Summarization with LLMs. In Findings of the Association for Computational Linguistics: EACL 2026, pages 3708–3721, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
StrucSum: Graph-Structured Reasoning for Long Document Extractive Summarization with LLMs (Yuan et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.192.pdf
Checklist:
 2026.findings-eacl.192.checklist.pdf