HateModerate: Testing Hate Speech Detectors against Content Moderation Policies

Jiangrui Zheng, Xueqing Liu, Mirazul Haque, Xing Qian, Guanqun Yang, Wei Yang


Abstract
To protect users from massive amounts of hateful content, prior work has studied automated hate speech detection. Despite these efforts, one question remains: do automated hate speech detectors conform to social media content policies? A platform's content policies are a checklist of the content moderated by that platform. Because content moderation rules are often uniquely defined, existing hate speech datasets cannot directly answer this question. This work answers the question by creating HateModerate, a dataset for testing the behavior of automated content moderators against content policies. First, we engage 28 annotators and GPT in a six-step annotation process, producing hateful and non-hateful test suites matching each of Facebook's 41 hate speech policies. Second, we test state-of-the-art hate speech detectors against HateModerate, revealing substantial failures in these models' conformity to the policies. Third, we use HateModerate to augment the training data of a top-downloaded hate speech detector on HuggingFace. The augmented model shows significantly improved conformity to content policies while maintaining comparable scores on the original test data. Our dataset and code are available at https://github.com/stevens-textmining/HateModerate.
Anthology ID: 2024.findings-naacl.172
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 2691–2710
URL: https://aclanthology.org/2024.findings-naacl.172
Cite (ACL): Jiangrui Zheng, Xueqing Liu, Mirazul Haque, Xing Qian, Guanqun Yang, and Wei Yang. 2024. HateModerate: Testing Hate Speech Detectors against Content Moderation Policies. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 2691–2710, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): HateModerate: Testing Hate Speech Detectors against Content Moderation Policies (Zheng et al., Findings 2024)
PDF: https://preview.aclanthology.org/nschneid-patch-3/2024.findings-naacl.172.pdf
Copyright: 2024.findings-naacl.172.copyright.pdf