MemeDetoxNet: Balancing Toxicity Reduction and Context Preservation

Gitanjali Kumari, Jitendra Solanki, Asif Ekbal


Abstract
Toxic memes spread harmful and offensive content and pose a significant challenge in online environments. In this paper, we present MemeDetoxNet, a robust framework designed to mitigate toxicity in memes by leveraging fine-tuned pre-trained models. Our approach utilizes the interpretability of CLIP (Contrastive Language-Image Pre-Training) to identify toxic elements within the visual and textual components of memes. Our objective is to automatically assess the immorality of toxic memes and transform them into morally acceptable alternatives by employing large language models (LLMs) to replace offensive text and by blurring toxic regions in the image. MemeDetoxNet thus comprises three main primitives: (1) detecting toxic memes, (2) localizing and highlighting toxic visual and textual attributes, and (3) manipulating the toxic content to create a morally acceptable alternative. Empirical evaluation on several publicly available meme datasets shows a reduction in toxicity of approximately 10–20%. Both qualitative and quantitative analyses further demonstrate MemeDetoxNet's superior performance in detoxifying memes compared to competing methods. These results underscore MemeDetoxNet's potential as an effective tool for content moderation on online platforms.
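The three primitives described in the abstract can be sketched as a minimal text-only pipeline. Everything below is illustrative: the toy lexicon stands in for CLIP-based toxicity detection and attribution, and the token replacement stands in for LLM rewriting; none of it is the authors' implementation.

```python
# Hedged sketch of the three MemeDetoxNet primitives (detection,
# localization, manipulation). The lexicon and function names are
# hypothetical placeholders, not the paper's actual method.

OFFENSIVE_WORDS = {"idiot", "loser"}  # toy stand-in for a learned toxicity model


def detect_toxic(caption: str) -> bool:
    """Primitive 1: flag a meme whose caption contains a toxic term."""
    return any(w in OFFENSIVE_WORDS for w in caption.lower().split())


def localize(caption: str) -> list[str]:
    """Primitive 2: return the toxic tokens (stand-in for CLIP attribution)."""
    return [w for w in caption.lower().split() if w in OFFENSIVE_WORDS]


def detoxify(caption: str) -> str:
    """Primitive 3: replace toxic tokens (stand-in for LLM rewriting)."""
    return " ".join(
        "[removed]" if w.lower() in OFFENSIVE_WORDS else w
        for w in caption.split()
    )


caption = "what a loser"
if detect_toxic(caption):
    print(localize(caption))   # ['loser']
    print(detoxify(caption))   # what a [removed]
```

In the paper's full pipeline, the same three stages also operate on the image modality, where manipulation means blurring the localized toxic regions rather than rewriting text.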
Anthology ID:
2025.findings-acl.1286
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
25076–25098
URL:
https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.1286/
DOI:
10.18653/v1/2025.findings-acl.1286
Cite (ACL):
Gitanjali Kumari, Jitendra Solanki, and Asif Ekbal. 2025. MemeDetoxNet: Balancing Toxicity Reduction and Context Preservation. In Findings of the Association for Computational Linguistics: ACL 2025, pages 25076–25098, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
MemeDetoxNet: Balancing Toxicity Reduction and Context Preservation (Kumari et al., Findings 2025)
PDF:
https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.1286.pdf