Capturing Covertly Toxic Speech via Crowdsourcing

Alyssa Lees, Daniel Borkan, Ian Kivlichan, Jorge Nario, Tesh Goyal
Abstract
We study the task of labeling covert or veiled toxicity in online conversations. Prior research has highlighted the difficulty in creating language models that recognize nuanced toxicity such as microaggressions. Our investigations further underscore the difficulty in parsing such labels reliably from raters via crowdsourcing. We introduce an initial dataset, COVERTTOXICITY, which aims to identify and categorize such comments from a refined rater template. Finally, we fine-tune a comment-domain BERT model to classify covertly offensive comments and compare against existing baselines.
Anthology ID:
2021.hcinlp-1.3
Volume:
Proceedings of the First Workshop on Bridging Human–Computer Interaction and Natural Language Processing
Month:
April
Year:
2021
Address:
Online
Editors:
Su Lin Blodgett, Michael Madaio, Brendan O'Connor, Hanna Wallach, Qian Yang
Venue:
HCINLP
Publisher:
Association for Computational Linguistics
Pages:
14–20
URL:
https://aclanthology.org/2021.hcinlp-1.3
Cite (ACL):
Alyssa Lees, Daniel Borkan, Ian Kivlichan, Jorge Nario, and Tesh Goyal. 2021. Capturing Covertly Toxic Speech via Crowdsourcing. In Proceedings of the First Workshop on Bridging Human–Computer Interaction and Natural Language Processing, pages 14–20, Online. Association for Computational Linguistics.
Cite (Informal):
Capturing Covertly Toxic Speech via Crowdsourcing (Lees et al., HCINLP 2021)
PDF:
https://preview.aclanthology.org/add_acl24_videos/2021.hcinlp-1.3.pdf
Data
Civil Comments, SBIC