Probing Critical Learning Dynamics of PLMs for Hate Speech Detection
Sarah Masud, Mohammad Aflah Khan, Vikram Goyal, Md Shad Akhtar, Tanmoy Chakraborty
Abstract
Despite their widespread adoption, there is little research into how critical aspects of pretrained language models (PLMs) affect their performance in hate speech detection. Through five research questions, our findings and recommendations lay the groundwork for empirically investigating different aspects of using PLMs for hate speech detection. We compare different pretrained models, evaluating their seed robustness, finetuning settings, and the impact of pretraining data collection time. Our analysis reveals early performance peaks for downstream tasks during pretraining, the limited benefit of a more recent pretraining corpus, and the significance of specific layers during finetuning. We further call into question the use of domain-specific models and highlight the need for dynamic datasets to benchmark hate speech detection.
- Anthology ID: 2024.findings-eacl.55
- Volume: Findings of the Association for Computational Linguistics: EACL 2024
- Month: March
- Year: 2024
- Address: St. Julian’s, Malta
- Editors: Yvette Graham, Matthew Purver
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 826–845
- URL: https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-eacl.55/
- Cite (ACL): Sarah Masud, Mohammad Aflah Khan, Vikram Goyal, Md Shad Akhtar, and Tanmoy Chakraborty. 2024. Probing Critical Learning Dynamics of PLMs for Hate Speech Detection. In Findings of the Association for Computational Linguistics: EACL 2024, pages 826–845, St. Julian’s, Malta. Association for Computational Linguistics.
- Cite (Informal): Probing Critical Learning Dynamics of PLMs for Hate Speech Detection (Masud et al., Findings 2024)
- PDF: https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-eacl.55.pdf