ArgCMV: An Argument Summarization Benchmark for the LLM-era

Omkar Gurjar, Agam Goyal, Eshwar Chandrasekharan


Abstract
Key point (KP) extraction is an important task in argument summarization that involves distilling arguments into high-level, short summaries. Existing approaches for KP extraction have mostly been evaluated on the popular ArgKP21 dataset. In this paper, we highlight major limitations of the ArgKP21 dataset and demonstrate the need for new benchmarks that are more representative of actual human conversations. Using state-of-the-art (SoTA) large language models (LLMs), we curate a new argument key point extraction dataset called ArgCMV, comprising ∼12K arguments from real online human debates spanning ∼3K topics. Our dataset exhibits higher complexity than ArgKP21, with longer, co-referencing arguments, a higher presence of subjective discourse units, and a wider range of topics. We show that existing methods do not adapt well to ArgCMV, and we provide extensive benchmark results by experimenting with existing baselines and the latest open-source models. This work introduces a novel KP extraction dataset for long-context online discussions, setting the stage for the next generation of LLM-driven summarization research.
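To make the task concrete, the sketch below illustrates the shape of LLM-based key point extraction as described in the abstract: mapping a long argument on a topic to a single short key point. This is not the authors' pipeline; the prompt wording and the generate() placeholder are assumptions standing in for any LLM backend.

# Illustrative sketch only, not the ArgCMV authors' method.
# generate() is a hypothetical stand-in for an LLM call.

def build_kp_prompt(topic: str, argument: str) -> str:
    """Format a prompt asking an LLM to distill an argument into one key point."""
    return (
        f"Topic: {topic}\n"
        f"Argument: {argument}\n"
        "Summarize this argument as a single short key point:"
    )

def generate(prompt: str) -> str:
    # Hypothetical placeholder: wire in any chat-completion API here.
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_kp_prompt(
        topic="CMV: Online debates improve critical thinking",
        argument=(
            "When people have to defend a view against strangers who push back, "
            "they are forced to articulate their assumptions and find evidence, "
            "which rarely happens in agreement-only spaces."
        ),
    )
    print(prompt)  # key_point = generate(prompt) once an LLM backend is wired in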
Anthology ID:
2025.emnlp-main.1110
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
21881–21894
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1110/
Cite (ACL):
Omkar Gurjar, Agam Goyal, and Eshwar Chandrasekharan. 2025. ArgCMV: An Argument Summarization Benchmark for the LLM-era. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 21881–21894, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
ArgCMV: An Argument Summarization Benchmark for the LLM-era (Gurjar et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1110.pdf
Checklist:
2025.emnlp-main.1110.checklist.pdf