Can Large Language Models Understand Argument Schemes?

Elfia Bezou-Vrakatseli, Oana Cocarascu, Sanjay Modgil
Abstract
Argument schemes represent stereotypical patterns of reasoning that occur in everyday arguments. However, despite their usefulness, argument scheme classification, that is, classifying natural language arguments according to the schemes they instantiate, is an under-explored task in NLP. In this paper we present a systematic evaluation of large language models (LLMs) for classifying argument schemes based on Walton’s taxonomy. We experiment with seven LLMs under zero-shot, few-shot, and chain-of-thought prompting, and explore two strategies to enhance task instructions: employing formal definitions and LLM-generated descriptions. Our analysis of both manually annotated and automatically generated arguments, including enthymemes, indicates that while larger models exhibit satisfactory performance in identifying argument schemes, challenges remain for smaller models. Our work offers the first comprehensive assessment of LLMs in identifying argument schemes, and provides insights for advancing reasoning capabilities in computational argumentation.
Anthology ID:
2025.findings-acl.702
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13666–13681
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.702/
DOI:
10.18653/v1/2025.findings-acl.702
Cite (ACL):
Elfia Bezou-Vrakatseli, Oana Cocarascu, and Sanjay Modgil. 2025. Can Large Language Models Understand Argument Schemes?. In Findings of the Association for Computational Linguistics: ACL 2025, pages 13666–13681, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Can Large Language Models Understand Argument Schemes? (Bezou-Vrakatseli et al., Findings 2025)
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.702.pdf