Abstract
Hate speech classifiers exhibit substantial performance degradation when evaluated on corpora other than the one they were trained on. This degradation stems from spurious correlations, learned from the training corpus, between hate speech labels and words that are not necessarily relevant to hateful language. Previous work has attempted to mitigate this problem by regularizing specific terms from pre-defined static dictionaries. While this has been shown to improve the generalizability of classifiers, the coverage of such methods is limited and the dictionaries require regular manual updates from human experts. In this paper, we propose to automatically identify and reduce spurious correlations using attribution methods, dynamically refining the list of terms to be regularized during training. Our approach is flexible and improves cross-corpora performance over previous work, both independently and in combination with pre-defined dictionaries.
- Anthology ID:
- 2022.findings-acl.32
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2022
- Month:
- May
- Year:
- 2022
- Address:
- Dublin, Ireland
- Editors:
- Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 372–382
- URL:
- https://aclanthology.org/2022.findings-acl.32
- DOI:
- 10.18653/v1/2022.findings-acl.32
- Cite (ACL):
- Tulika Bose, Nikolaos Aletras, Irina Illina, and Dominique Fohr. 2022. Dynamically Refined Regularization for Improving Cross-corpora Hate Speech Detection. In Findings of the Association for Computational Linguistics: ACL 2022, pages 372–382, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal):
- Dynamically Refined Regularization for Improving Cross-corpora Hate Speech Detection (Bose et al., Findings 2022)
- PDF:
- https://preview.aclanthology.org/ingest-2024-clasp/2022.findings-acl.32.pdf
- Code
- tbose20/d-ref
- Data
- HatEval
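The core idea in the abstract, maintaining a term list that is dynamically refined from attribution scores rather than fixed in advance, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name `refine_term_list`, the `top_k` cutoff, and the toy attribution scores are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of dynamically refining a regularization term list.
# All names and values here are illustrative assumptions; the paper's
# actual attribution method and selection criterion may differ.

def refine_term_list(attribution_scores, static_dict, top_k=2):
    """Merge a pre-defined static dictionary with the top-k terms
    that an attribution method assigns the highest importance.

    attribution_scores: dict mapping token -> attribution magnitude
        (e.g. from a gradient-based attribution method).
    static_dict: iterable of terms from a pre-defined dictionary.
    Returns the updated set of terms to regularize this epoch.
    """
    ranked = sorted(attribution_scores, key=attribution_scores.get, reverse=True)
    dynamic_terms = set(ranked[:top_k])
    return set(static_dict) | dynamic_terms

# Toy example: tokens whose attributions dominate the prediction are
# added to the regularization list alongside the static dictionary.
scores = {"muslims": 0.9, "people": 0.1, "attack": 0.7, "the": 0.05}
terms = refine_term_list(scores, static_dict={"slur1"}, top_k=2)
```

Repeating this selection each training epoch would let the regularized list track whatever spurious terms the classifier currently relies on, instead of depending on manual dictionary updates.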