Deniz Gorur




2025

Can Large Language Models perform Relation-based Argument Mining?
Deniz Gorur | Antonio Rago | Francesca Toni
Proceedings of the 31st International Conference on Computational Linguistics

Relation-based Argument Mining (RbAM) is the process of automatically determining agreement (support) and disagreement (attack) relations amongst textual arguments (in the binary prediction setting), or neither relation (in the ternary prediction setting). As the number of platforms supporting online debate increases, the need for RbAM becomes ever more urgent, especially in support of downstream tasks. RbAM is a challenging classification task, with existing state-of-the-art methods, based on Language Models (LMs), failing to perform satisfactorily across different datasets. In this paper, we show that general-purpose Large LMs (LLMs), appropriately primed and prompted, can significantly outperform the best performing (RoBERTa-based) baseline. Specifically, we experiment with two open-source LLMs (Llama-2 and Mistral) and with GPT-3.5-turbo on several datasets for (binary and ternary) RbAM, as well as with GPT-4o-mini on samples (to limit costs) from the datasets.
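The RbAM setup described above (classifying a pair of arguments as support, attack, or neither) can be sketched as a zero-shot prompting loop. This is a minimal illustrative sketch, not the paper's actual prompts or pipeline: the function names, the prompt wording, and the label-parsing heuristic are all assumptions made for illustration.

```python
# Hypothetical sketch of ternary RbAM via zero-shot LLM prompting.
# The prompt text and helper names are illustrative, not from the paper.

LABELS = ("support", "attack", "neither")


def build_rbam_prompt(parent: str, child: str, ternary: bool = True) -> str:
    """Build a zero-shot prompt asking an LLM to classify the relation
    from a child argument to a parent argument."""
    options = LABELS if ternary else LABELS[:2]  # binary drops "neither"
    return (
        "Classify the relation from Argument B to Argument A.\n"
        f"Argument A: {parent}\n"
        f"Argument B: {child}\n"
        f"Answer with exactly one word: {', '.join(options)}."
    )


def parse_label(completion: str, ternary: bool = True) -> str:
    """Map a raw LLM completion to one of the allowed labels,
    falling back to the last option if nothing matches."""
    text = completion.lower()
    options = LABELS if ternary else LABELS[:2]
    for label in options:
        if label in text:
            return label
    return options[-1]


if __name__ == "__main__":
    prompt = build_rbam_prompt(
        "Remote work increases productivity.",
        "Studies show home offices are full of distractions.",
    )
    print(prompt)
    print(parse_label("Attack. Argument B disputes Argument A."))
```

In practice the prompt would be sent to an LLM (e.g. Llama-2, Mistral, or GPT-3.5-turbo, as in the paper) and the completion fed to `parse_label`; the binary setting simply restricts the label set to support/attack.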