Abstract
The automatic detection of hate speech online is an active research area in NLP. Most studies to date are based on social media datasets, which are used to train the resulting hate speech detection models. However, data creation processes contain their own biases, and models inherently learn from these dataset-specific biases. In this paper, we perform a large-scale cross-dataset comparison in which we fine-tune language models on different hate speech detection datasets. This analysis shows that some datasets are more generalizable than others when used as training data. Crucially, our experiments show how combining hate speech detection datasets can contribute to the development of robust hate speech detection models. This robustness holds even when controlling for data size and comparing against the best individual datasets.
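The cross-dataset setup described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' released code: the base model name is an assumption, and the dataset identifiers and the "text"/"label" column names are hypothetical placeholders for the hate speech corpora being combined and the held-out evaluation set.

```python
# Sketch: fine-tune a transformer classifier on a combination of hate speech
# datasets, then evaluate it on a dataset unseen during training.
from datasets import load_dataset, concatenate_datasets
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

MODEL_NAME = "cardiffnlp/twitter-roberta-base"  # assumed base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

# Placeholder dataset names; each is assumed to expose "text" and a binary
# "label" column already mapped to a shared hate / not-hate scheme.
train_sets = [
    load_dataset("hate_dataset_a", split="train"),
    load_dataset("hate_dataset_b", split="train"),
]
combined_train = (concatenate_datasets(train_sets)
                  .shuffle(seed=42)
                  .map(tokenize, batched=True))

# Held-out dataset not seen during training (cross-dataset evaluation).
eval_set = load_dataset("hate_dataset_c", split="test").map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=32),
    train_dataset=combined_train,
    eval_dataset=eval_set,
    tokenizer=tokenizer,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
print(trainer.evaluate())  # cross-dataset performance on the held-out set
```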
- Anthology ID: 2023.woah-1.25
- Volume: The 7th Workshop on Online Abuse and Harms (WOAH)
- Month: July
- Year: 2023
- Address: Toronto, Canada
- Editors: Yi-ling Chung, Paul Röttger, Debora Nozza, Zeerak Talat, Aida Mostafazadeh Davani
- Venue: WOAH
- Publisher: Association for Computational Linguistics
- Pages: 231–242
- URL: https://aclanthology.org/2023.woah-1.25
- DOI: 10.18653/v1/2023.woah-1.25
- Cite (ACL): Dimosthenis Antypas and Jose Camacho-Collados. 2023. Robust Hate Speech Detection in Social Media: A Cross-Dataset Empirical Evaluation. In The 7th Workshop on Online Abuse and Harms (WOAH), pages 231–242, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal): Robust Hate Speech Detection in Social Media: A Cross-Dataset Empirical Evaluation (Antypas & Camacho-Collados, WOAH 2023)
- PDF: https://preview.aclanthology.org/ingest-2024-clasp/2023.woah-1.25.pdf