Debate-to-Detect: Reformulating Misinformation Detection as a Real-World Debate with Large Language Models

Chen Han, Wenzhen Zheng, Xijin Tang


Abstract
The proliferation of misinformation on digital platforms exposes the limitations of traditional detection methods, which mostly rely on static classification and fail to capture the intricate process of real-world fact-checking. Despite advancements in Large Language Models (LLMs) that enhance automated reasoning, their application to misinformation detection remains hindered by logical inconsistency and superficial verification. Inspired by the idea that “Truth Becomes Clearer Through Debate”, we introduce Debate-to-Detect (D2D), a novel Multi-Agent Debate (MAD) framework that reformulates misinformation detection as a structured adversarial debate. Grounded in fact-checking workflows, D2D assigns domain-specific profiles to each agent and orchestrates a five-stage debate process comprising Opening Statement, Rebuttal, Free Debate, Closing Statement, and Judgment. To transcend traditional binary classification, D2D introduces a multi-dimensional evaluation mechanism that assesses each claim across five distinct dimensions: Factuality, Source Reliability, Reasoning Quality, Clarity, and Ethics. Experiments with GPT-4o on two fake-news datasets demonstrate significant improvements over baseline methods, and the case study highlights D2D’s capability to iteratively refine evidence while improving decision transparency, representing a substantial advancement towards robust and interpretable misinformation detection. Our code is available at https://github.com/hanshenmesen/Debate-to-Detect.
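
The debate pipeline described in the abstract can be pictured as follows. This is a minimal, hypothetical sketch (not the authors’ released implementation, which is available in the linked repository): the agent profiles, prompt wording, and the `query_llm` stub are assumptions, standing in for a real chat-completion client such as GPT-4o; the Judgment stage is modeled here as the final five-dimension scoring step.

```python
# Hypothetical sketch of a five-stage adversarial debate for misinformation detection.
# `query_llm` is a placeholder for any chat-completion call; replace it with a real client.

STAGES = ["Opening Statement", "Rebuttal", "Free Debate", "Closing Statement"]
DIMENSIONS = ["Factuality", "Source Reliability", "Reasoning Quality", "Clarity", "Ethics"]


def query_llm(prompt: str) -> str:
    """Stub LLM call; swap in an actual API client (e.g., GPT-4o) in practice."""
    return f"[model response to: {prompt[:60]}...]"


def run_debate(claim: str) -> dict:
    transcript = []
    # Two adversarial agents with domain-specific profiles argue for and against the claim.
    profiles = {
        "Affirmative": "a fact-checker arguing the claim is genuine news",
        "Negative": "a fact-checker arguing the claim is misinformation",
    }
    for stage in STAGES:
        for side, profile in profiles.items():
            prompt = (
                f"You are {profile}. Stage: {stage}.\n"
                f"Claim: {claim}\nPrior transcript: {transcript}"
            )
            transcript.append((stage, side, query_llm(prompt)))

    # Judgment stage: score the debate on five dimensions instead of a single binary label.
    scores = {}
    for dim in DIMENSIONS:
        judge_prompt = (
            f"As a judge, rate the claim's {dim} (0-10) given this debate "
            f"transcript: {transcript}"
        )
        scores[dim] = query_llm(judge_prompt)
    return {"transcript": transcript, "scores": scores}


if __name__ == "__main__":
    print(run_debate("Example headline: 'City bans all bicycles starting Monday.'"))
```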
Anthology ID:
2025.emnlp-main.764
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
15125–15140
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.764/
Cite (ACL):
Chen Han, Wenzhen Zheng, and Xijin Tang. 2025. Debate-to-Detect: Reformulating Misinformation Detection as a Real-World Debate with Large Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 15125–15140, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Debate-to-Detect: Reformulating Misinformation Detection as a Real-World Debate with Large Language Models (Han et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.764.pdf
Checklist:
 2025.emnlp-main.764.checklist.pdf