Sanda Harabagiu
2026
The MISOMEM-Val Dataset for Identifying Human Values in Misogynistic Memes
Rakshitha Rao Ailneni | Sanda Harabagiu
Proceedings of the Fifteenth Language Resources and Evaluation Conference
We present MISOMEM-Val, the first dataset that systematically annotates human values across Frames of Misogyny (FoMs) derived from misogynistic memes. Extending the Taxonomy of Misogyny, each frame is linked to the Human Value Hierarchy (HVH) with annotated support and ignore stances and accompanying rationales. In total, 1,089 frames were annotated, comprising 3,051 support and 7,007 ignore value instances. We introduce Hierarchical Value Discovery with Human Feedback (HVD-HF), an LLM-assisted annotation framework combining Chain-of-Thought prompting and self-consistency verification to ensure transparency and quality. The annotation analysis reveals systematic asymmetries: Conservation and Self-Enhancement are frequently supported, while Self-Transcendence is often ignored, highlighting how misogynistic memes distort core human values.
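The following is a minimal sketch, not the authors' released code, of how an LLM-assisted annotation step with Chain-of-Thought prompting and self-consistency verification might look. The helper call_llm() and the prompt wording are illustrative assumptions; only the support/ignore stance labels come from the abstract above.

from collections import Counter

def call_llm(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical stand-in for any chat-style LLM client."""
    raise NotImplementedError

def annotate_frame(frame_text: str, value: str, num_samples: int = 5) -> dict:
    """Sample several reasoning chains and keep the majority stance (self-consistency)."""
    prompt = (
        f"Frame of Misogyny: {frame_text}\n"
        f"Human value: {value}\n"
        "Think step by step about whether the frame SUPPORTS or IGNORES this value.\n"
        "End your answer with one line: STANCE: SUPPORT or STANCE: IGNORE."
    )
    stances, rationales = [], []
    for _ in range(num_samples):
        reply = call_llm(prompt)
        stance = "SUPPORT" if "STANCE: SUPPORT" in reply.upper() else "IGNORE"
        stances.append(stance)
        rationales.append(reply)
    majority, count = Counter(stances).most_common(1)[0]
    return {
        "value": value,
        "stance": majority,
        "agreement": count / num_samples,  # low agreement can be routed to human feedback
        "rationales": rationales,
    }

In such a setup, low-agreement items would be the natural candidates for the human-feedback loop that HVD-HF's name suggests, while high-agreement items keep the sampled rationales as annotation evidence.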
Exploration of How Hate Is Framed on Social Media
Rakshitha Rao Ailneni | Sanda Harabagiu
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Understanding how hate is framed in multimodal social media content is crucial for developing interpretable and robust hate detection systems. We present the MM-HateFrames Dataset, a large-scale resource encoding 2,298 Hate Frames (HFs) and their corresponding rationales, discovered from two benchmark datasets, Hateful Memes and MMHS150K, which together comprise more than 11K multimodal social media posts. Relying on MM-HateFrames, we explore several generative and non-generative methods for automatically discovering how hate is framed, including clustering-based methods and large multimodal models (LMMs) under zero-shot and few-shot settings. Experimental evaluations show that few-shot LMM prompting generates the most coherent and sound frame articulations. The MM-HateFrames Dataset provides a valuable foundation for future research in hate speech understanding, frame articulation, and explainable multimodal NLP, enabling models to interpret not only whether content is hateful but also how hate is conceptually framed.
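As a rough illustration, and under stated assumptions rather than as the paper's pipeline, few-shot LMM prompting for frame articulation could be sketched as below. The Example fields, the message format, and the helper query_lmm() are all hypothetical; the idea is simply to prepend a few annotated (image, text, frame) examples before the query post.

from dataclasses import dataclass
from typing import List

@dataclass
class Example:
    image_path: str   # path to the meme / post image
    post_text: str    # overlaid or accompanying text
    hate_frame: str   # gold frame articulation

def query_lmm(messages: List[dict]) -> str:
    """Hypothetical stand-in for a multimodal chat client accepting interleaved images and text."""
    raise NotImplementedError

def articulate_frame(query_image: str, query_text: str, shots: List[Example]) -> str:
    """Build a few-shot prompt from annotated examples and ask the LMM for a frame articulation."""
    messages = [{"role": "system",
                 "text": "Articulate how hate is framed in the post as one concise sentence."}]
    for ex in shots:
        messages.append({"role": "user", "image": ex.image_path, "text": ex.post_text})
        messages.append({"role": "assistant", "text": ex.hate_frame})
    messages.append({"role": "user", "image": query_image, "text": query_text})
    return query_lmm(messages)

A zero-shot variant would drop the example loop and keep only the system instruction and the query post, which matches the zero-shot setting the abstract compares against.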