Abstract
The growing number of comments makes online discussions difficult to moderate by human moderators alone. Antisocial behavior is a common occurrence that often discourages other users from participating in the discussion. We propose a neural-network-based method that partially automates the moderation process. It consists of two steps. First, we detect inappropriate comments for moderators to review. Second, we highlight the inappropriate parts within these comments to make moderation faster. We evaluated our method on data from a major Slovak news discussion platform.
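The two-step pipeline described above (first flag a comment as inappropriate, then highlight the parts responsible for the decision) can be illustrated with a small attention-based classifier whose attention weights double as token-level highlights. This is only a hedged sketch in PyTorch, not the architecture from the paper; the model structure, hyperparameters, and the use of attention weights as highlights are assumptions made for illustration.

```python
# Illustrative sketch only (not the authors' published model): a toy
# attention-based comment classifier whose attention weights can be read out
# to show a moderator which tokens drove an "inappropriate" prediction.
import torch
import torch.nn as nn

class AttentionCommentClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attention = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, 2)  # appropriate vs. inappropriate

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer token indices
        embedded = self.embedding(token_ids)            # (batch, seq_len, embed_dim)
        states, _ = self.gru(embedded)                  # (batch, seq_len, 2*hidden_dim)
        scores = self.attention(states).squeeze(-1)     # (batch, seq_len)
        weights = torch.softmax(scores, dim=-1)         # per-token attention weights
        context = torch.bmm(weights.unsqueeze(1), states).squeeze(1)
        logits = self.classifier(context)               # (batch, 2)
        return logits, weights                          # weights serve as "highlights"

# Toy usage: flag a comment when the "inappropriate" probability is high, then
# surface the tokens with the largest attention weights to the moderator.
# (The model is untrained here, so the outputs are only for demonstration.)
model = AttentionCommentClassifier(vocab_size=5000)
fake_comment = torch.randint(1, 5000, (1, 12))          # stand-in for a tokenized comment
logits, weights = model(fake_comment)
probs = torch.softmax(logits, dim=-1)
if probs[0, 1] > 0.5:                                    # step 1: detection
    top_tokens = weights[0].topk(3).indices              # step 2: highlighting
    print("Flag comment; highlight token positions:", top_tokens.tolist())
```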
- Anthology ID: W18-5108
- Volume: Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)
- Month: October
- Year: 2018
- Address: Brussels, Belgium
- Editors: Darja Fišer, Ruihong Huang, Vinodkumar Prabhakaran, Rob Voigt, Zeerak Waseem, Jacqueline Wernimont
- Venue: ALW
- Publisher: Association for Computational Linguistics
- Pages: 60–65
- URL: https://preview.aclanthology.org/icon-24-ingestion/W18-5108/
- DOI: 10.18653/v1/W18-5108
- Cite (ACL): Andrej Švec, Matúš Pikuliak, Marián Šimko, and Mária Bieliková. 2018. Improving Moderation of Online Discussions via Interpretable Neural Models. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 60–65, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal): Improving Moderation of Online Discussions via Interpretable Neural Models (Švec et al., ALW 2018)
- PDF: https://preview.aclanthology.org/icon-24-ingestion/W18-5108.pdf