John Salvador
2025
Benchmarking LLMs on Semantic Overlap Summarization
John Salvador | Naman Bansal | Mousumi Akter | Souvika Sarkar | Anupam Das | Santu Karmaker
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Semantic Overlap Summarization (SOS) is a multi-document summarization task focused on extracting the common information shared across alternative narratives, a capability that is critical for trustworthy generation in domains such as news, law, and healthcare. We benchmark popular Large Language Models (LLMs) on SOS and introduce PrivacyPolicyPairs (3P), a new dataset of 135 high-quality samples from privacy policy documents, which complements existing resources and broadens domain coverage. Using the TELeR prompting taxonomy, we evaluate nearly one million LLM-generated summaries across two SOS datasets and conduct human evaluation on a curated subset. Our analysis reveals strong prompt sensitivity, identifies which automatic metrics align most closely with human judgments, and provides new baselines for future SOS research.
LLMs as Meta-Reviewers’ Assistants: A Case Study
Eftekhar Hossain | Sanjeev Kumar Sinha | Naman Bansal | R. Alexander Knipper | Souvika Sarkar | John Salvador | Yash Mahajan | Sri Ram Pavan Kumar Guttikonda | Mousumi Akter | Md. Mahadi Hassan | Matthew Freestone | Matthew C. Williams Jr. | Dongji Feng | Santu Karmaker
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
One of the most important yet onerous tasks in the academic peer-reviewing process is composing meta-reviews, which involves assimilating diverse opinions from multiple expert peers, formulating one's own judgment as a senior expert, and then summarizing all these perspectives into a concise holistic overview to make an overall recommendation. This process is time-consuming and can be compromised by human factors such as fatigue, inconsistency, and overlooked details. Given the latest major developments in Large Language Models (LLMs), it is compelling to rigorously study whether LLMs can help meta-reviewers perform this important task better. In this paper, we perform a case study with three popular LLMs, i.e., GPT-3.5, LLaMA2, and PaLM2, to assist meta-reviewers in better comprehending multiple experts' perspectives by generating a controlled multi-perspective summary (MPS) of their opinions. To achieve this, we prompt the three LLMs with different types/levels of prompts based on the recently proposed TELeR taxonomy. Finally, we perform a detailed qualitative study of the MPSs generated by the LLMs and report our findings.