A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation

Xuanyu Su, Yansong Li, Diana Inkpen, Nathalie Japkowicz


Abstract
Amidst the rise of Large Multimodal Models (LMMs) and their widespread application in generating and interpreting complex content, the risk of propagating biased and harmful memes remains significant. Current safety measures often fail to detect subtly integrated hateful content within “Confounder Memes”. To address this, we introduce HateSieve, a new framework designed to enhance the detection and segmentation of hateful elements in memes. HateSieve features a novel Contrastive Meme Generator that creates semantically correlated memes, a customized triplet dataset for contrastive learning, and an Image-Text Alignment module that produces context-aware embeddings for accurate meme segmentation. Empirical experiments show that HateSieve not only surpasses existing LMMs in performance with fewer trainable parameters but also offers a robust mechanism for precisely identifying and isolating hateful content. Caution: Contains academic discussions of hate speech; viewer discretion advised.
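The abstract describes a triplet-based contrastive objective over context-aware image-text embeddings. The sketch below is not the authors' code; it is a minimal illustration of that general idea, where the fusion step (fuse_image_text), the margin value, and the embedding size are illustrative assumptions rather than details from the paper.

import torch
import torch.nn.functional as F

def fuse_image_text(img_emb: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
    # Hypothetical image-text alignment step: combine modality embeddings into a
    # single context-aware embedding (here, a simple normalized sum).
    return F.normalize(img_emb + txt_emb, dim=-1)

def triplet_contrastive_loss(anchor, positive, negative, margin: float = 0.2) -> torch.Tensor:
    # Pull the semantically correlated (anchor, positive) pair together and push
    # the contrasting negative away, using cosine distance and a margin.
    d_pos = 1.0 - F.cosine_similarity(anchor, positive, dim=-1)
    d_neg = 1.0 - F.cosine_similarity(anchor, negative, dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage: each meme is represented by an image embedding and a text embedding.
img_a, txt_a = torch.randn(8, 512), torch.randn(8, 512)   # anchor memes
img_p, txt_p = torch.randn(8, 512), torch.randn(8, 512)   # semantically correlated memes (positives)
img_n, txt_n = torch.randn(8, 512), torch.randn(8, 512)   # contrasting memes (negatives)

loss = triplet_contrastive_loss(fuse_image_text(img_a, txt_a),
                                fuse_image_text(img_p, txt_p),
                                fuse_image_text(img_n, txt_n))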
Anthology ID:
2025.findings-naacl.289
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5201–5215
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.289/
Cite (ACL):
Xuanyu Su, Yansong Li, Diana Inkpen, and Nathalie Japkowicz. 2025. A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 5201–5215, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation (Su et al., Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.289.pdf