Antonio Rago
2025
Can Large Language Models perform Relation-based Argument Mining?
Deniz Gorur | Antonio Rago | Francesca Toni
Proceedings of the 31st International Conference on Computational Linguistics
Relation-based Argument Mining (RbAM) is the process of automatically determining agreement (support) and disagreement (attack) relations amongst textual arguments (in the binary prediction setting), or neither relation (in the ternary prediction setting). As the number of platforms supporting online debate increases, the need for RbAM becomes ever more urgent, especially in support of downstream tasks. RbAM is a challenging classification task, with existing state-of-the-art methods, based on Language Models (LMs), failing to perform satisfactorily across different datasets. In this paper, we show that general-purpose Large LMs (LLMs), appropriately primed and prompted, can significantly outperform the best-performing (RoBERTa-based) baseline. Specifically, we experiment with two open-source LLMs (Llama-2 and Mistral) and with GPT-3.5-turbo on several datasets for (binary and ternary) RbAM, as well as with GPT-4o-mini on samples (to limit costs) from the datasets.
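To illustrate the task, the sketch below shows what prompt-based binary or ternary RbAM might look like in code. It is not the paper's pipeline: the prompt wording, label strings, and the `query_llm` placeholder are assumptions introduced here purely for illustration.

```python
# Illustrative sketch only: prompt an LLM to classify the relation between two
# arguments as support/attack (binary) or support/attack/neither (ternary).
# The prompt text and query_llm placeholder are assumptions, not the paper's setup.

TERNARY_LABELS = ("support", "attack", "neither")
BINARY_LABELS = ("support", "attack")

def build_prompt(parent: str, child: str, ternary: bool = True) -> str:
    labels = TERNARY_LABELS if ternary else BINARY_LABELS
    return (
        "Given two arguments, decide the relation of the second to the first.\n"
        f"Answer with exactly one of: {', '.join(labels)}.\n\n"
        f"Argument 1: {parent}\n"
        f"Argument 2: {child}\n"
        "Relation:"
    )

def classify_relation(parent: str, child: str, query_llm, ternary: bool = True) -> str:
    """query_llm: any callable mapping a prompt string to the model's text reply
    (e.g. a thin wrapper around Llama-2, Mistral or GPT-3.5-turbo)."""
    labels = TERNARY_LABELS if ternary else BINARY_LABELS
    reply = query_llm(build_prompt(parent, child, ternary)).strip().lower()
    # If the model answers off-format, fall back to the last label
    # ("neither" in the ternary setting).
    return next((label for label in labels if label in reply), labels[-1])
```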
Evaluating Uncertainty Quantification Methods in Argumentative Large Language Models
Kevin Zhou | Adam Dejl | Gabriel Freedman | Lihu Chen | Antonio Rago | Francesca Toni
Findings of the Association for Computational Linguistics: EMNLP 2025
Research in uncertainty quantification (UQ) for large language models (LLMs) is increasingly important towards guaranteeing the reliability of this groundbreaking technology. We explore the integration of LLM UQ methods in argumentative LLMs (ArgLLMs), an explainable LLM framework for decision-making based on computational argumentation in which UQ plays a critical role. We conduct experiments to evaluate ArgLLMs’ performance on claim verification tasks when using different LLM UQ methods, inherently performing an assessment of the UQ methods’ effectiveness. Moreover, the experimental procedure itself is a novel way of evaluating the effectiveness of UQ methods, especially when intricate and potentially contentious statements are present. Our results demonstrate that, despite its simplicity, direct prompting is an effective UQ strategy in ArgLLMs, outperforming considerably more complex approaches.
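As a rough sketch of what "direct prompting" as a UQ strategy might look like, the code below asks a model to rate its own confidence in a claim. It is a hypothetical illustration, not the ArgLLMs implementation; the prompt wording, fallback value, and the `query_llm` placeholder are assumptions.

```python
# Illustrative sketch only: elicit a confidence score for a claim by asking the
# model directly, one simple UQ strategy of the kind the abstract refers to.
# Prompt text and query_llm placeholder are assumptions, not the ArgLLMs code.

def direct_prompt_confidence(claim: str, query_llm) -> float:
    """Return a confidence score in [0, 1] for the claim.
    query_llm: any callable mapping a prompt string to the model's text reply."""
    prompt = (
        "How confident are you that the following claim is true?\n"
        f"Claim: {claim}\n"
        "Reply with a single number between 0 and 1."
    )
    reply = query_llm(prompt).strip()
    try:
        score = float(reply.split()[0])
    except (ValueError, IndexError):
        return 0.5  # neutral fallback when the reply cannot be parsed
    return min(1.0, max(0.0, score))
```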
Co-authors
- Francesca Toni 2
- Lihu Chen 1
- Adam Dejl 1
- Gabriel Freedman 1
- Deniz Gorur 1
- Kevin Zhou 1