Incivility Detection in Online Comments

Farig Sadeque, Stephen Rains, Yotam Shmargad, Kate Kenski, Kevin Coe, Steven Bethard


Abstract
Incivility in public discourse has been a major concern in recent times, as it can negatively affect the quality and tenacity of the discourse. In this paper, we present neural models that can learn to detect name-calling and vulgarity from a newspaper comment section. We show that, in contrast to prior work on detecting toxic language, fine-grained incivilities like name-calling cannot be accurately detected by simple models like logistic regression. We apply the models trained on the newspaper comment data to detect uncivil comments in a Russian troll dataset, and find that despite the change of domain, the model makes accurate predictions.
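To make the baseline comparison concrete, below is a minimal sketch of the kind of simple bag-of-words logistic regression classifier the abstract refers to as insufficient for fine-grained incivility detection. This is not the paper's model or data: the comments, labels, and feature settings are illustrative assumptions using scikit-learn.

```python
# Illustrative sketch only: a TF-IDF + logistic regression baseline of the
# kind the abstract argues is inadequate for fine-grained incivility such as
# name-calling. The toy comments and labels are hypothetical, not drawn from
# the paper's newspaper-comment or Russian-troll datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training comments with binary name-calling labels (1 = uncivil).
train_comments = [
    "You are a complete idiot if you believe that.",
    "I respectfully disagree with the council's decision.",
    "What a clown, typical of these politicians.",
    "Thanks for sharing the article, very informative.",
]
train_labels = [1, 0, 1, 0]

# Word unigram/bigram TF-IDF features fed into a logistic regression classifier.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
baseline.fit(train_comments, train_labels)

# Predict on an unseen comment; name-calling that avoids obvious lexical cues
# tends to evade such surface-level features, motivating neural models.
print(baseline.predict(["Only a fool would support this proposal."]))
```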
Anthology ID:
S19-1031
Volume:
Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Venues:
SemEval | *SEM
SIGs:
SIGSEM | SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
283–291
URL:
https://aclanthology.org/S19-1031
DOI:
10.18653/v1/S19-1031
Cite (ACL):
Farig Sadeque, Stephen Rains, Yotam Shmargad, Kate Kenski, Kevin Coe, and Steven Bethard. 2019. Incivility Detection in Online Comments. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 283–291, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Incivility Detection in Online Comments (Sadeque et al., SemEval-*SEM 2019)
PDF:
https://preview.aclanthology.org/nodalida-main-page/S19-1031.pdf