Principled Content Selection to Generate Diverse and Personalized Multi-Document Summaries

Vishakh Padmakumar, Zichao Wang, David Arbour, Jennifer Healey


Abstract
While large language models (LLMs) are increasingly capable of handling longer contexts, recent work has demonstrated that they exhibit the _"lost in the middle"_ phenomenon (Liu et al., 2024) of unevenly attending to different parts of the provided context. This hinders their ability to cover diverse source material in multi-document summarization, as noted in the DiverseSumm benchmark (Huang et al., 2024). In this work, we contend that principled content selection is a simple way to increase source coverage on this task. As opposed to prompting an LLM to perform the summarization in a single step, we explicitly divide the task into three steps—(1) reducing document collections to atomic key points, (2) using determinantal point processes (DPP) to select key points that prioritize diverse content, and (3) rewriting to the final summary. By combining prompting steps (for extraction and rewriting) with principled techniques (for content selection), we consistently improve source coverage on the DiverseSumm benchmark across various LLMs. Finally, we also show that by incorporating relevance to a provided user intent into the DPP kernel, we can generate _personalized_ summaries that cover _relevant_ source information while retaining coverage.
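
To illustrate how step (2) might look in practice, below is a minimal sketch (not the authors' released code) of quality-diversity DPP selection over extracted key points. It assumes a hypothetical `embed` function that returns sentence embeddings, builds the standard kernel L = diag(q) S diag(q) where S is cosine similarity and q is an optional relevance score toward a user intent, and selects items with simple greedy MAP inference by log-determinant gain.

```python
# Minimal sketch of DPP-based key-point selection, assuming key points have
# already been extracted and embedded (e.g., with any sentence encoder).
import numpy as np

def build_dpp_kernel(embeddings: np.ndarray, relevance: np.ndarray | None = None) -> np.ndarray:
    """L_ij = q_i * s_ij * q_j: s_ij is cosine similarity between key points
    (diversity term), q_i an optional relevance score toward the user intent."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    S = X @ X.T                                   # pairwise similarity
    q = np.ones(len(X)) if relevance is None else relevance
    return np.outer(q, q) * S                     # quality-diversity decomposition

def greedy_dpp_select(L: np.ndarray, k: int) -> list[int]:
    """Greedy MAP inference: repeatedly add the key point that most increases
    log det of the selected submatrix, trading off relevance against redundancy."""
    selected: list[int] = []
    eps = 1e-10
    for _ in range(k):
        best_j, best_gain = None, -np.inf
        for j in range(len(L)):
            if j in selected:
                continue
            idx = selected + [j]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)] + eps * np.eye(len(idx)))
            if sign > 0 and logdet > best_gain:
                best_j, best_gain = j, logdet
        if best_j is None:
            break
        selected.append(best_j)
    return selected

# Usage sketch (embed and key_points are hypothetical):
# E = embed(key_points)
# rel = (E @ embed([user_intent]).T).ravel()      # omit for the non-personalized setting
# chosen = [key_points[i] for i in greedy_dpp_select(build_dpp_kernel(E, rel), k=5)]
```

The selected key points would then be passed to the rewriting step to produce the final summary; setting the relevance term from a user intent is what yields the personalized variant described in the abstract.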
Anthology ID:
2025.acl-long.1445
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
29884–29899
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1445/
Cite (ACL):
Vishakh Padmakumar, Zichao Wang, David Arbour, and Jennifer Healey. 2025. Principled Content Selection to Generate Diverse and Personalized Multi-Document Summaries. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 29884–29899, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Principled Content Selection to Generate Diverse and Personalized Multi-Document Summaries (Padmakumar et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1445.pdf