Preemptive Toxic Language Detection in Wikipedia Comments Using Thread-Level Context

Mladen Karan, Jan Šnajder



Abstract
We address the task of automatically detecting toxic content in user-generated texts. We focus on exploring the potential for preemptive moderation, i.e., predicting whether a particular conversation thread will, in the future, incite a toxic comment. Moreover, we perform a preliminary investigation of whether a model that jointly considers all comments in a conversation thread outperforms a model that considers only individual comments. Using an existing dataset of conversations among Wikipedia contributors as a starting point, we compile a new large-scale dataset for this task, consisting of labeled comments and comments from their conversation threads.
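
As a rough illustration of the comment-level vs. thread-level distinction described in the abstract, the sketch below contrasts a classifier that sees only the latest comment with one that concatenates the whole conversation thread into a single input. This is not the paper's model; it assumes a generic TF-IDF plus logistic regression pipeline (scikit-learn) and a hypothetical threads variable holding (list of comments, label) pairs, where the label marks whether the thread later incited a toxic comment.

# Minimal sketch (not the authors' implementation): comment-level vs.
# thread-level preemptive toxicity classifiers.
# `threads` is a hypothetical list of (comments, label) pairs, where
# label == 1 means the thread later incited a toxic comment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def comment_level_data(threads):
    # Use only the most recent comment of each thread as input.
    texts = [comments[-1] for comments, _ in threads]
    labels = [label for _, label in threads]
    return texts, labels

def thread_level_data(threads):
    # Concatenate all comments so the model sees the full thread context.
    texts = [" ".join(comments) for comments, _ in threads]
    labels = [label for _, label in threads]
    return texts, labels

def train(texts, labels):
    # Generic bag-of-words baseline; the paper's actual models may differ.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)
    return model

The only difference between the two settings in this sketch is how the input text is assembled; the thread-level variant simply exposes the preceding comments as additional context to the same classifier.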
Anthology ID:
W19-3514
Volume:
Proceedings of the Third Workshop on Abusive Language Online
Month:
August
Year:
2019
Address:
Florence, Italy
Editors:
Sarah T. Roberts, Joel Tetreault, Vinodkumar Prabhakaran, Zeerak Waseem
Venue:
ALW
Publisher:
Association for Computational Linguistics
Pages:
129–134
URL:
https://aclanthology.org/W19-3514
DOI:
10.18653/v1/W19-3514
Cite (ACL):
Mladen Karan and Jan Šnajder. 2019. Preemptive Toxic Language Detection in Wikipedia Comments Using Thread-Level Context. In Proceedings of the Third Workshop on Abusive Language Online, pages 129–134, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Preemptive Toxic Language Detection in Wikipedia Comments Using Thread-Level Context (Karan & Šnajder, ALW 2019)
PDF:
https://preview.aclanthology.org/teach-a-man-to-fish/W19-3514.pdf