ArgInstruct: Specialized Instruction Fine-Tuning for Computational Argumentation

Maja Stahl, Timon Ziegenbein, Joonsuk Park, Henning Wachsmuth


Abstract
Training large language models (LLMs) to follow instructions has significantly enhanced their ability to tackle unseen tasks. However, despite their strong generalization capabilities, instruction-following LLMs encounter difficulties when dealing with tasks that require domain knowledge. This work introduces a specialized instruction fine-tuning approach for the domain of computational argumentation (CA). The goal is to enable an LLM to effectively tackle any unseen CA task while preserving its generalization capabilities. To this end, we reviewed existing CA research and crafted natural language instructions for 105 CA tasks. On this basis, we developed a CA-specific benchmark for LLMs that allows for a comprehensive evaluation of their capabilities in solving various CA tasks. Adapting the self-instruct process, we then synthesized 52k CA-related instructions to train a CA-specialized instruction-following LLM. Our experiments suggest that CA-specialized instruction fine-tuning significantly enhances the LLM's performance on both seen and unseen CA tasks. At the same time, performance on the general NLP tasks of the SuperNI benchmark remains stable.
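For readers unfamiliar with the self-instruct idea the abstract refers to, the snippet below is a minimal, illustrative sketch of such a synthesis loop, not the authors' actual pipeline: the prompt wording, the difflib-based overlap filter, and the generate_fn callable are assumptions standing in for whatever LLM and filtering criteria the paper uses.

```python
# Illustrative self-instruct-style synthesis loop (sketch, not the paper's pipeline):
# sample a few seed CA task instructions, ask an LLM to propose a new one,
# and keep it only if it is sufficiently different from what is already collected.

import random
from difflib import SequenceMatcher
from typing import Callable, List


def too_similar(candidate: str, pool: List[str], threshold: float = 0.7) -> bool:
    """Reject candidates that largely overlap with instructions already in the pool."""
    return any(
        SequenceMatcher(None, candidate, seen).ratio() >= threshold
        for seen in pool
    )


def synthesize_instructions(
    seed_instructions: List[str],
    generate_fn: Callable[[str], str],  # hypothetical wrapper around any LLM
    target_size: int = 52_000,
    n_demos: int = 4,
) -> List[str]:
    pool = list(seed_instructions)  # start from human-written CA task instructions
    while len(pool) < target_size:
        demos = random.sample(pool, k=min(n_demos, len(pool)))
        prompt = (
            "Come up with a new computational argumentation task instruction, "
            "in the style of these examples:\n"
            + "\n".join(f"- {d}" for d in demos)
        )
        candidate = generate_fn(prompt).strip()
        if candidate and not too_similar(candidate, pool):
            pool.append(candidate)  # keep only sufficiently novel instructions
    return pool
```

In a real setup, the synthesized instructions would additionally be paired with inputs and outputs and then used as fine-tuning data; the paper's filtering and generation details may differ from this sketch.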
Anthology ID:
2025.findings-acl.579
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11103–11127
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.579/
Cite (ACL):
Maja Stahl, Timon Ziegenbein, Joonsuk Park, and Henning Wachsmuth. 2025. ArgInstruct: Specialized Instruction Fine-Tuning for Computational Argumentation. In Findings of the Association for Computational Linguistics: ACL 2025, pages 11103–11127, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
ArgInstruct: Specialized Instruction Fine-Tuning for Computational Argumentation (Stahl et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.579.pdf