Abstract
The goal of any social media platform is to facilitate healthy and meaningful interactions among its users. All too often, however, it becomes an avenue for wanton attacks. We propose an experimental study with three aims: 1) to provide a deeper understanding of current datasets that focus on different, sometimes overlapping types of abusive language (racism, sexism, hate speech, offensive language, and personal attacks); 2) to investigate which type of attention mechanism (contextual vs. self-attention) is better suited to abusive language detection with deep learning architectures; and 3) to investigate whether stacked architectures provide an advantage over simple architectures for this task.

- Anthology ID: W19-3508
- Volume: Proceedings of the Third Workshop on Abusive Language Online
- Month: August
- Year: 2019
- Address: Florence, Italy
- Editors: Sarah T. Roberts, Joel Tetreault, Vinodkumar Prabhakaran, Zeerak Waseem
- Venue: ALW
- Publisher: Association for Computational Linguistics
- Pages: 70–79
- URL: https://aclanthology.org/W19-3508
- DOI: 10.18653/v1/W19-3508
- Cite (ACL): Tuhin Chakrabarty, Kilol Gupta, and Smaranda Muresan. 2019. Pay “Attention” to your Context when Classifying Abusive Language. In Proceedings of the Third Workshop on Abusive Language Online, pages 70–79, Florence, Italy. Association for Computational Linguistics.
- Cite (Informal): Pay “Attention” to your Context when Classifying Abusive Language (Chakrabarty et al., ALW 2019)
- PDF: https://preview.aclanthology.org/ingest-acl-2023-videos/W19-3508.pdf
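For readers unfamiliar with the distinction the abstract draws, the two mechanisms can be contrasted in a minimal NumPy sketch. This is an illustration only, not the paper's implementation: all function names, shapes, and the exact formulation of "contextual" attention (a single learned context vector scoring each token, in the style of hierarchical attention networks) are assumptions made here for exposition.

```python
# Illustrative sketch (NOT the paper's code) contrasting the two attention
# styles the abstract compares, using plain NumPy.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H, Wq, Wk, Wv):
    """Scaled dot-product self-attention: every token attends to every token,
    producing one contextualized vector per token."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # (seq, seq) attention weights
    return softmax(scores) @ V                 # (seq, d)

def contextual_attention(H, w):
    """Context-vector attention (assumed formulation): one learned query
    vector scores each token, yielding a single weighted sentence summary."""
    alpha = softmax(H @ w)                     # (seq,) weights over tokens
    return alpha @ H                           # (d,) pooled representation

rng = np.random.default_rng(0)
seq_len, d = 5, 8                              # toy "sentence": 5 token vectors
H = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

attended = self_attention(H, Wq, Wk, Wv)       # per-token outputs
summary = contextual_attention(H, rng.normal(size=d))  # one sentence vector
print(attended.shape, summary.shape)           # (5, 8) (8,)
```

The structural difference is what the study's second aim probes: self-attention keeps a per-token representation (useful for stacking further layers), while contextual attention collapses the sequence into one vector that a classifier head can consume directly.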