Dynamic Knowledge Integration for Evidence-Driven Counter-Argument Generation with Large Language Models

Anar Yeginbergen, Maite Oronoz, Rodrigo Agerri


Abstract
This paper investigates the role of dynamic external knowledge integration in improving counter-argument generation using Large Language Models (LLMs). While LLMs have shown promise in argumentative tasks, their tendency to generate lengthy, potentially non-factual responses highlights the need for more controlled and evidence-based approaches. We introduce a reconstructed and manually curated dataset of argument and counter-argument pairs specifically designed to balance argumentative complexity with evaluative feasibility. We also propose a new LLM-as-a-Judge evaluation methodology that shows a stronger correlation with human judgments compared to traditional reference-based metrics. Our experimental results demonstrate that integrating dynamic external knowledge from the web significantly improves the quality of generated counter-arguments, particularly in terms of relatedness, persuasiveness, and factuality. The findings suggest that combining LLMs with real-time external knowledge retrieval offers a promising direction for developing more effective and reliable counter-argumentation systems. Data and code are publicly available: https://github.com/anaryegen/counter-argument-generation
Anthology ID:
2025.findings-acl.1161
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
22568–22584
URL:
https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.1161/
DOI:
10.18653/v1/2025.findings-acl.1161
Cite (ACL):
Anar Yeginbergen, Maite Oronoz, and Rodrigo Agerri. 2025. Dynamic Knowledge Integration for Evidence-Driven Counter-Argument Generation with Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 22568–22584, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Dynamic Knowledge Integration for Evidence-Driven Counter-Argument Generation with Large Language Models (Yeginbergen et al., Findings 2025)
PDF:
https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.1161.pdf