Reranking-based Generation for Unbiased Perspective Summarization

Narutatsu Ri, Nicholas Deas, Kathleen McKeown

Abstract
Generating unbiased summaries in real-world settings such as political perspective summarization remains a crucial application of Large Language Models (LLMs). Yet existing evaluation frameworks rely on traditional metrics for measuring key attributes such as coverage and faithfulness without verifying their applicability, and efforts to develop improved summarizers are still nascent. We address these gaps by (1) identifying reliable metrics for measuring perspective summary quality, and (2) investigating the efficacy of LLM-based methods beyond zero-shot inference. Specifically, we build a test set for benchmarking metric reliability using human annotations and show that traditional metrics underperform compared to language model–based metrics, which prove to be strong evaluators. Using these metrics, we show that reranking-based methods yield strong results, and that preference tuning with synthetically generated and reranking-labeled data further boosts performance. We hope these findings contribute to the reliable evaluation and development of perspective summarization methods.
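The abstract does not spell out the reranking procedure itself; as a general illustration of best-of-N reranking for summary generation (a minimal sketch, not the authors' exact method), the Python below samples several candidate summaries and keeps the one an evaluator scores highest. The names `rerank_generate`, `generate`, and `score` are hypothetical placeholders; in practice they would wrap an LLM summarizer and an LLM-based quality metric such as a coverage/faithfulness judge.

```python
from typing import Callable, List


def rerank_generate(
    source_docs: List[str],
    generate: Callable[[List[str]], str],      # LLM call producing one candidate summary
    score: Callable[[List[str], str], float],  # LLM-based metric scoring a candidate
    num_candidates: int = 8,
) -> str:
    """Sample `num_candidates` summaries and return the highest-scoring one."""
    candidates = [generate(source_docs) for _ in range(num_candidates)]
    return max(candidates, key=lambda cand: score(source_docs, cand))


if __name__ == "__main__":
    # Placeholder callables for demonstration; real use would wrap LLM API calls.
    docs = ["Perspective A on the policy ...", "Perspective B on the policy ..."]
    best = rerank_generate(
        docs,
        generate=lambda d: f"A summary covering {len(d)} perspectives.",
        score=lambda d, summary: float(len(summary)),  # dummy stand-in metric
    )
    print(best)
```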
Anthology ID:
2025.findings-acl.1268
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues:
Findings | WS
Publisher:
Association for Computational Linguistics
Pages:
24701–24723
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.findings-acl.1268/
Cite (ACL):
Narutatsu Ri, Nicholas Deas, and Kathleen McKeown. 2025. Reranking-based Generation for Unbiased Perspective Summarization. In Findings of the Association for Computational Linguistics: ACL 2025, pages 24701–24723, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Reranking-based Generation for Unbiased Perspective Summarization (Ri et al., Findings 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.findings-acl.1268.pdf