@inproceedings{zia-etal-2021-racist,
    title = "Racist or Sexist Meme? Classifying Memes beyond Hateful",
    author = "Zia, Haris Bin  and
      Castro, Ignacio  and
      Tyson, Gareth",
    editor = "Mostafazadeh Davani, Aida  and
      Kiela, Douwe  and
      Lambert, Mathias  and
      Vidgen, Bertie  and
      Prabhakaran, Vinodkumar  and
      Waseem, Zeerak",
    booktitle = "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2021.woah-1.23/",
    doi = "10.18653/v1/2021.woah-1.23",
    pages = "215--219",
    abstract = "Memes are the combinations of text and images that are often humorous in nature. But, that may not always be the case, and certain combinations of texts and images may depict hate, referred to as hateful memes. This work presents a multimodal pipeline that takes both visual and textual features from memes into account to (1) identify the protected category (e.g. race, sex etc.) that has been attacked; and (2) detect the type of attack (e.g. contempt, slurs etc.). Our pipeline uses state-of-the-art pre-trained visual and textual representations, followed by a simple logistic regression classifier. We employ our pipeline on the Hateful Memes Challenge dataset with additional newly created fine-grained labels for protected category and type of attack. Our best model achieves an AUROC of 0.96 for identifying the protected category, and 0.97 for detecting the type of attack. We release our code at \url{https://github.com/harisbinzia/HatefulMemes}"
}