Abstract
Memes are combinations of text and images that are often humorous in nature. However, this is not always the case: certain combinations of text and images may instead convey hate, and these are referred to as hateful memes. This work presents a multimodal pipeline that takes both visual and textual features of memes into account to (1) identify the protected category (e.g., race, sex) that has been attacked, and (2) detect the type of attack (e.g., contempt, slurs). Our pipeline uses state-of-the-art pre-trained visual and textual representations, followed by a simple logistic regression classifier. We apply our pipeline to the Hateful Memes Challenge dataset, augmented with newly created fine-grained labels for protected category and type of attack. Our best model achieves an AUROC of 0.96 for identifying the protected category and 0.97 for detecting the type of attack. We release our code at https://github.com/harisbinzia/HatefulMemes.
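To make the described pipeline concrete, the sketch below shows the general pattern of concatenating pre-trained visual and textual embeddings, training a logistic regression classifier, and scoring with multi-class AUROC. It is only an illustration under assumptions: the random features stand in for whatever pre-trained encoders the authors used, and the protected-category label set shown here is a hypothetical subset, not the paper's exact inventory.

```python
# Minimal sketch of a feature-fusion + logistic-regression pipeline of the kind
# described in the abstract. Feature dimensions, encoders and labels below are
# illustrative assumptions, not the authors' exact configuration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-ins for pre-trained representations of each meme: a visual embedding
# (e.g., from an image encoder) and a textual embedding (e.g., from a sentence
# encoder), concatenated into a single multimodal feature vector.
n_memes, d_img, d_txt = 1000, 512, 768
img_feats = rng.normal(size=(n_memes, d_img))
txt_feats = rng.normal(size=(n_memes, d_txt))
X = np.concatenate([img_feats, txt_feats], axis=1)

# Task 1: protected category attacked (hypothetical label subset for illustration).
categories = ["race", "sex", "religion", "nationality", "disability"]
y = rng.integers(0, len(categories), size=n_memes)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Simple logistic regression classifier on top of the fused features.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_tr, y_tr)

# Multi-class AUROC (one-vs-rest, macro-averaged), mirroring the paper's metric.
scores = clf.predict_proba(X_te)
auroc = roc_auc_score(y_te, scores, multi_class="ovr", average="macro")
print(f"AUROC (protected category, synthetic data): {auroc:.3f}")
```

The same recipe would apply to the second task (type of attack) by swapping in that label set; with synthetic features the reported AUROC is of course meaningless and serves only to show the evaluation call.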
- Anthology ID: 2021.woah-1.23
- Volume: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)
- Month: August
- Year: 2021
- Address: Online
- Editors: Aida Mostafazadeh Davani, Douwe Kiela, Mathias Lambert, Bertie Vidgen, Vinodkumar Prabhakaran, Zeerak Waseem
- Venue: WOAH
- Publisher: Association for Computational Linguistics
- Pages: 215–219
- URL: https://aclanthology.org/2021.woah-1.23
- DOI: 10.18653/v1/2021.woah-1.23
- Cite (ACL): Haris Bin Zia, Ignacio Castro, and Gareth Tyson. 2021. Racist or Sexist Meme? Classifying Memes beyond Hateful. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 215–219, Online. Association for Computational Linguistics.
- Cite (Informal): Racist or Sexist Meme? Classifying Memes beyond Hateful (Zia et al., WOAH 2021)
- PDF: https://preview.aclanthology.org/nschneid-patch-1/2021.woah-1.23.pdf
- Code: harisbinzia/hatefulmemes
- Data: Hateful Memes, Hateful Memes Challenge