Let’s discuss! Quality Dimensions and Annotated Datasets for Computational Argument Quality Assessment

Rositsa V Ivanova, Thomas Huber, Christina Niklaus


Abstract
Research in the computational assessment of Argumentation Quality has gained popularity over the last ten years. Various quality dimensions have been explored through the creation of domain-specific datasets and assessment methods. We survey the related literature (211 publications and 32 datasets), while addressing potential overlaps and blurry boundaries with related domains. This paper provides a representative overview of the state of the art in Computational Argument Quality Assessment with a focus on quality dimensions and annotated datasets. The aim of the survey is to identify research gaps and to aid future discussions and work in the domain.
Anthology ID: 2024.emnlp-main.1155
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 20749–20779
URL: https://preview.aclanthology.org/jlcl-multiple-ingestion/2024.emnlp-main.1155/
DOI: 10.18653/v1/2024.emnlp-main.1155
Cite (ACL):
Rositsa V Ivanova, Thomas Huber, and Christina Niklaus. 2024. Let’s discuss! Quality Dimensions and Annotated Datasets for Computational Argument Quality Assessment. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 20749–20779, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Let’s discuss! Quality Dimensions and Annotated Datasets for Computational Argument Quality Assessment (Ivanova et al., EMNLP 2024)
PDF: https://preview.aclanthology.org/jlcl-multiple-ingestion/2024.emnlp-main.1155.pdf