Fair and Argumentative Language Modeling for Computational Argumentation

Carolin Holtermann, Anne Lauscher, Simone Ponzetto


Abstract
Although much work in NLP has focused on measuring and mitigating stereotypical bias in semantic spaces, research addressing bias in computational argumentation is still in its infancy. In this paper, we address this research gap and conduct a thorough investigation of bias in argumentative language models. To this end, we introduce ABBA, a novel resource for bias measurement specifically tailored to argumentation. We employ our resource to assess the effect of argumentative fine-tuning and debiasing on the intrinsic bias found in transformer-based language models using a lightweight adapter-based approach that is more sustainable and parameter-efficient than full fine-tuning. Finally, we analyze the potential impact of language model debiasing on the performance in argument quality prediction, a downstream task of computational argumentation. Our results show that we are able to successfully and sustainably remove bias in general and argumentative language models while preserving (and sometimes improving) model performance in downstream tasks. We make all experimental code and data available at https://github.com/umanlp/FairArgumentativeLM.
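As a rough illustration of the adapter-based setup described in the abstract (not the authors' released code), the sketch below trains a small debiasing adapter with a masked-LM objective on counterfactually augmented argumentative sentences. It assumes the AdapterHub adapter-transformers fork of HuggingFace Transformers; the model name, example sentences, and hyperparameters are placeholders.

```python
# Sketch (assumption: adapter-transformers is installed, i.e. pip install adapter-transformers).
# Only the bottleneck adapter is updated, which is what makes the approach
# parameter-efficient compared to full fine-tuning.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling

model_name = "bert-base-uncased"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Add a Pfeiffer-style bottleneck adapter and freeze all other weights.
model.add_adapter("debias", config="pfeiffer")
model.train_adapter("debias")
model.set_active_adapters("debias")

# Toy counterfactually augmented pair (placeholder data): the same sentence
# appears with the target term swapped so both variants are seen equally often.
sentences = [
    "Muslims argue that the policy is unfair.",
    "Christians argue that the policy is unfair.",
]
encodings = tokenizer(sentences, truncation=True, padding=True, return_tensors="pt")
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
loader = DataLoader(
    [{k: v[i] for k, v in encodings.items()} for i in range(len(sentences))],
    batch_size=2,
    collate_fn=collator,
)

# Optimize only the parameters left trainable by train_adapter().
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
model.train()
for epoch in range(1):
    for batch in loader:
        loss = model(**batch).loss  # masked-LM loss on the augmented text
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Store only the adapter weights, not the full model.
model.save_adapter("debias_adapter", "debias")
```

The trained adapter can later be loaded into the same base model (or stacked with a task adapter) for downstream evaluation, e.g. argument quality prediction, without modifying the frozen pretrained weights.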
Anthology ID:
2022.acl-long.541
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
7841–7861
URL:
https://aclanthology.org/2022.acl-long.541
DOI:
10.18653/v1/2022.acl-long.541
Cite (ACL):
Carolin Holtermann, Anne Lauscher, and Simone Ponzetto. 2022. Fair and Argumentative Language Modeling for Computational Argumentation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7841–7861, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Fair and Argumentative Language Modeling for Computational Argumentation (Holtermann et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-long.541.pdf
Software:
 2022.acl-long.541.software.zip
Video:
https://aclanthology.org/2022.acl-long.541.mp4
Code:
 umanlp/fairargumentativelm