Contextualizing Hate Speech Classifiers with Post-hoc Explanation

Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Davani, Morteza Dehghani, Xiang Ren


Abstract
Hate speech classifiers trained on imbalanced datasets struggle to determine if group identifiers like “gay” or “black” are used in offensive or prejudiced ways. Such biases manifest in false positives when these identifiers are present, due to models’ inability to learn the contexts which constitute a hateful usage of identifiers. We extract post-hoc explanations from fine-tuned BERT classifiers to detect bias towards identity terms. Then, we propose a novel regularization technique based on these explanations that encourages models to learn from the context of group identifiers in addition to the identifiers themselves. Our approach improves over baselines in limiting false positives on out-of-domain data while maintaining, and in some cases improving, in-domain performance.
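The core idea of explanation-based regularization can be sketched with a toy example: measure each group identifier's importance by occlusion (score with the token vs. without it) and penalize that importance in the loss, so the model must rely on surrounding context. Note this is a minimal illustrative sketch, not the paper's actual method, which uses Sampling and Occlusion (SOC) explanations on fine-tuned BERT; the bag-of-words model, token weights, and `alpha` coefficient here are assumptions for demonstration only.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(weights, tokens):
    # Toy bag-of-words "classifier": sum of per-token weights -> hate probability.
    # Stands in for a fine-tuned BERT classifier, which the paper actually uses.
    return sigmoid(sum(weights.get(t, 0.0) for t in tokens))

def occlusion_importance(weights, tokens, target):
    # Post-hoc importance of `target`: how much the prediction drops
    # when the token is removed (occluded) from the input.
    full = predict(weights, tokens)
    occluded = predict(weights, [t for t in tokens if t != target])
    return full - occluded

def regularized_loss(weights, tokens, label, identifiers, alpha=1.0):
    # Cross-entropy plus alpha times the squared importance of each group
    # identifier, discouraging the model from scoring an input as hateful
    # merely because an identity term like "gay" or "black" is present.
    p = predict(weights, tokens)
    ce = -(label * math.log(p) + (1 - label) * math.log(1 - p))
    reg = sum(occlusion_importance(weights, tokens, t) ** 2
              for t in tokens if t in identifiers)
    return ce + alpha * reg
```

A model that leans heavily on the identifier itself (a large weight on "gay") incurs a larger regularized loss than one that spreads importance over context tokens, which is the pressure the paper's regularizer applies during fine-tuning.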
Anthology ID: 2020.acl-main.483
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 5435–5442
URL: https://aclanthology.org/2020.acl-main.483
DOI: 10.18653/v1/2020.acl-main.483
Cite (ACL):
Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Davani, Morteza Dehghani, and Xiang Ren. 2020. Contextualizing Hate Speech Classifiers with Post-hoc Explanation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5435–5442, Online. Association for Computational Linguistics.
Cite (Informal):
Contextualizing Hate Speech Classifiers with Post-hoc Explanation (Kennedy et al., ACL 2020)
PDF: https://preview.aclanthology.org/nodalida-main-page/2020.acl-main.483.pdf
Software: 2020.acl-main.483.Software.zip
Video: http://slideslive.com/38929143
Code: BrendanKennedy/contextualizing-hate-speech-models-with-explanations + additional community code
Data: Hate Speech