Rakshitha Rao Ailneni
Also published as: Ailneni Rakshitha Rao
2026
Exploration of How Hate Is Framed on Social Media
Rakshitha Rao Ailneni | Sanda Harabagiu
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Understanding how hate is framed in multimodal social media content is crucial for developing interpretable and robust hate detection systems. We present the MM-HateFrames Dataset, a large-scale resource encoding 2,298 Hate Frames (HFs) and their corresponding rationales, discovered from two benchmark datasets (Hateful Memes and MMHS150K) comprising over 11K multimodal social media posts. This resource allowed us to explore several generative and non-generative methods for automatically discovering how hate is framed, including clustering-based methods and large multimodal models (LMMs) under zero-shot and few-shot settings. Experimental evaluations show that few-shot LMM prompting generates the most coherent and sound frame articulations. The MM-HateFrames Dataset provides a valuable foundation for future research in hate speech understanding, frame articulation, and explainable multimodal NLP, enabling models to interpret not only whether content is hateful but also how hate is conceptually framed.
The MISOMEM-Val Dataset for Identifying Human Values in Misogynistic Memes
Rakshitha Rao Ailneni | Sanda Harabagiu
Proceedings of the Fifteenth Language Resources and Evaluation Conference
We present MISOMEM-Val, the first dataset that systematically annotates human values across Frames of Misogyny (FoMs) derived from misogynistic memes. Extending the Taxonomy of Misogyny, each frame is linked to the Human Value Hierarchy (HVH) with annotated support and ignore stances and accompanying rationales. In total, 1,089 frames were annotated, comprising 3,051 support and 7,007 ignore value instances. We introduce Hierarchical Value Discovery with Human Feedback (HVD-HF), an LLM-assisted annotation framework combining Chain-of-Thought prompting and self-consistency verification to ensure transparency and quality. The annotation analysis reveals systematic asymmetries: Conservation and Self-Enhancement are frequently supported, while Self-Transcendence is often ignored, highlighting how misogynistic memes distort core human values.
2025
Automatically Discovering How Misogyny is Framed on Social Media
Rakshitha Rao Ailneni | Sanda M. Harabagiu
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Misogyny, which is widespread on social media, can be identified not only by recognizing its many forms but also by discovering how it is framed. This paper considers the automatic discovery of misogyny problems and their frames through the Dis-MP&F method, which enables the generation of a data-driven, rich Taxonomy of Misogyny (ToM), offering new insights into the complexity of expressions of misogyny. Furthermore, the Dis-MP&F method, informed by the ToM, achieves very promising results on a misogyny benchmark dataset.
2022
ASRtrans at SemEval-2022 Task 5: Transformer-based Models for Meme Classification
Ailneni Rakshitha Rao | Arjun Rao
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
Women are frequently targeted online with hate speech and misogyny through tweets, memes, and other forms of communication. This paper describes our system for Task 5 of SemEval-2022: Multimedia Automatic Misogyny Identification (MAMI). We participated in both sub-tasks, using transformer-based architectures to combine image and text features. We explore models with multimodal pre-training (VisualBERT) and text-based pre-training (MMBT), drawing comparative results. We also show how additional training with task-related external data can improve model performance. We achieved sizable improvements over baseline models, and the official evaluation ranked our system 3rd out of 83 teams on the binary classification task (Sub-task A) with an F1 score of 0.761, and 7th out of 48 teams on the multi-label classification task (Sub-task B) with an F1 score of 0.705.
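As context for the feature-combination approach the abstract describes, a minimal early-fusion sketch: image and text embeddings are concatenated before a classification head. All dimensions and the random features below are illustrative, not taken from the paper's actual VisualBERT/MMBT systems.

```python
import numpy as np

# Hypothetical early-fusion sketch: concatenate per-post image and
# text embeddings, then apply a linear binary-classification head.
# Dimensions (512 / 768) are illustrative placeholders.
img_emb = np.random.default_rng(1).random((4, 512))  # e.g. visual features
txt_emb = np.random.default_rng(2).random((4, 768))  # e.g. [CLS] text features

fused = np.concatenate([img_emb, txt_emb], axis=1)   # (4, 1280) joint features
W = np.zeros((fused.shape[1], 2))                    # untrained head weights
logits = fused @ W                                   # (4, 2) class scores
print(fused.shape, logits.shape)
```

In the actual systems, fusion happens inside the transformer (VisualBERT attends jointly over region and token embeddings; MMBT projects image features into the text embedding space), but the concatenation view above captures the basic idea of combining the two modalities into one representation.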
ASRtrans at SemEval-2022 Task 4: Ensemble of Tuned Transformer-based Models for PCL Detection
Ailneni Rakshitha Rao
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
Patronizing behavior is a subtle form of bullying and, when directed towards vulnerable communities, it can give rise to inequalities. This paper describes our system for Task 4 of SemEval-2022: Patronizing and Condescending Language Detection (PCL). We participated in both sub-tasks and conducted extensive experiments to analyze the effects of data augmentation and of the loss functions used to tackle class imbalance. We explore whether large transformer-based models can capture the intricacies associated with PCL detection. Our solution consists of an ensemble of a RoBERTa model further trained on external data, together with other language models such as XLNet, ERNIE 2.0, and BERT. We also present the results of several problem transformation techniques, such as Classifier Chains, Label Powerset, and Binary Relevance, for multi-label classification.
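To illustrate the problem-transformation techniques named above, a minimal Binary Relevance sketch using scikit-learn: one independent binary classifier is trained per label. The toy features and three-label targets below are invented for illustration; the paper's system used transformer encoders, not logistic regression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Toy multi-label data: 40 posts with 8 features and 3 binary labels
# (placeholders, not the actual PCL categories or features).
rng = np.random.default_rng(0)
X = rng.random((40, 8))
Y = (rng.random((40, 3)) > 0.5).astype(int)

# Binary Relevance: MultiOutputClassifier fits one LogisticRegression
# per label column, treating each label as an independent binary task.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
pred = clf.predict(X)
print(pred.shape)  # (40, 3): one binary decision per label
```

Classifier Chains differ by feeding each classifier's prediction into the next (capturing label dependencies), and Label Powerset treats every observed label combination as a single multi-class target.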