Multi-Agent LLM Debate Unveils the Premise Left Unsaid

Harvey Bonmu Ku, Jeongyeol Shin, Hyoun Jun Lee, Seonok Na, Insu Jeon


Abstract
Implicit premises are central to argumentative coherence and faithfulness, yet they remain elusive to traditional single-pass computational models. We introduce a multi-agent framework that casts implicit premise recovery as a dialogic reasoning task between two LLM agents. Through structured rounds of debate, the agents critically evaluate competing premises and converge on the most contextually appropriate interpretation. Evaluated on a controlled binary classification benchmark for premise selection, our approach achieves state-of-the-art accuracy, outperforming both neural baselines and single-agent LLMs. We find that accuracy gains stem not from repeated generation but from agents refining their predictions in response to opposing views. Moreover, we show that forcing models to defend assigned stances degrades performance, entrenching rhetorical commitment to flawed reasoning. These results underscore the value of interactive debate in revealing the pragmatic components of argument structure.
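To make the debate protocol described above concrete, the sketch below shows one plausible shape of the loop: two agents exchange critiques over structured rounds and then cast final votes on a binary premise choice, free to revise their position rather than defend an assigned stance. This is a minimal illustration under assumptions, not the authors' implementation; the names (ask_llm, debate_premise) are hypothetical, and ask_llm stands in for any chat-model call.

```python
# Hypothetical sketch of a two-agent debate loop for binary premise selection.
# `ask_llm` is a placeholder for any LLM chat call; nothing here reproduces
# the paper's actual prompts or code.
from typing import Callable

def debate_premise(
    ask_llm: Callable[[str], str],
    argument: str,
    premise_a: str,
    premise_b: str,
    rounds: int = 2,
) -> str:
    """Agents debate for several rounds, then each picks the premise
    ('A' or 'B') that best fills the argument's implicit gap."""
    transcript = ""
    for r in range(rounds):
        for agent in ("Agent 1", "Agent 2"):
            reply = ask_llm(
                f"Argument: {argument}\n"
                f"Candidate implicit premises:\nA: {premise_a}\nB: {premise_b}\n"
                f"Debate so far:\n{transcript or '(none)'}\n"
                f"You are {agent}, round {r + 1}. Critique the other agent's "
                f"reasoning and state which premise you currently favor."
            )
            transcript += f"{agent} (round {r + 1}): {reply}\n"
    # Final verdicts: each agent may revise its prediction in light of the
    # opposing view, which the abstract identifies as the source of gains.
    votes = [
        ask_llm(
            f"{transcript}\nGiven the full debate, answer with exactly 'A' "
            f"or 'B': which premise is the missing one?"
        ).strip().upper()[:1]
        for _ in ("Agent 1", "Agent 2")
    ]
    # Simple resolution: agreement wins; on disagreement, take the last vote.
    return votes[0] if votes[0] == votes[1] else votes[-1]
```

In this sketch, convergence is read off the final votes rather than enforced mid-debate, mirroring the paper's finding that revision in response to opposing views, not repeated generation, drives accuracy.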
Anthology ID:
2025.argmining-1.6
Volume:
Proceedings of the 12th Argument Mining Workshop
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Elena Chistova, Philipp Cimiano, Shohreh Haddadan, Gabriella Lapesa, Ramon Ruiz-Dolz
Venues:
ArgMining | WS
Publisher:
Association for Computational Linguistics
Pages:
58–73
URL:
https://preview.aclanthology.org/display_plenaries/2025.argmining-1.6/
Cite (ACL):
Harvey Bonmu Ku, Jeongyeol Shin, Hyoun Jun Lee, Seonok Na, and Insu Jeon. 2025. Multi-Agent LLM Debate Unveils the Premise Left Unsaid. In Proceedings of the 12th Argument Mining Workshop, pages 58–73, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Multi-Agent LLM Debate Unveils the Premise Left Unsaid (Ku et al., ArgMining 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.argmining-1.6.pdf