
LLM DEBATE OPPONENT: Counter-argument Generation focusing on Implicit and Critical Premises
Taisei Ozaki | Chihiro Nakagawa | Naoya Inoue | Shoichi Naito | Kenshi Yamaguchi
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)

Debate education fosters critical thinking skills but often incurs high human costs. Recent advancements in Large Language Models (LLMs) show promise in automating counter-argument generation. However, it remains unclear how best to guide LLMs to target both implicit and critical premises. In this study, we systematically compare multi-step and one-step generation methods for counter-arguments across 100 debate topics. Our findings reveal that one-step approaches consistently outperform multi-step pipelines, owing to their better grasp of the “motion spirit,” minimized propagation of hallucinations, and avoidance of challenging intermediate tasks. Among premise-targeting methods, a one-step strategy that accounts for both implicit and explicit premises—Generated and Targeted Premise Attack (GTG)—emerges as the strongest performer in expert and automated evaluations. These results highlight the value of direct, integrated prompts for leveraging LLMs in complex argumentation tasks and offer insights for developing more effective automated debate agents.