Yogesh Kumar
2026
Explainable AI for Ethical Counter Speech Generation in Hate Speech Mitigation
Ashiful Islam Ridoy | Mohammed Faisal | Yogesh Kumar | Md Mamun-Ur Rashid | Marina Ernst | Frank Hopfgartner
Proceedings of the Fifteenth Language Resources and Evaluation Conference
The proliferation of hate speech on digital communication platforms poses significant challenges to online safety and social cohesion. While automated hate speech detection systems have shown promise, their black-box nature limits user trust and understanding of AI-driven content moderation decisions. This paper presents a framework that integrates explainable AI (XAI) techniques with counter-speech generation to create transparent, ethical solutions for hate speech mitigation. Our approach combines a fine-tuned HateBERT model with a specialized Llama 3.1-8B-Instruct model for generating empathetic counter-narratives. The system employs five distinct XAI methods (Integrated Gradients, Attention Visualization, LIME, Counterfactual Analysis, and Natural Language Explanations) to provide interpretable reasoning behind both detection and response generation decisions. The integration of explainability mechanisms with counter-speech generation represents a novel contribution to ethical AI systems, fostering transparency and trust in automated hate speech mitigation while maintaining high performance standards for real-world deployment.
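To make the detection-plus-attribution step concrete, here is a minimal sketch, not the paper's implementation. It pairs a HateBERT-style classifier with Captum's LayerIntegratedGradients, one of the five XAI methods named in the abstract. The checkpoint name is the public base model (a task-specific fine-tuned classification head is assumed), and the baseline choice is an illustrative assumption.

```python
# Sketch: token-level Integrated Gradients attributions for a
# HateBERT-style hate speech classifier (assumed fine-tuned; the
# public GroNLP/hateBERT checkpoint has no classification head).
import torch
from captum.attr import LayerIntegratedGradients
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("GroNLP/hateBERT")
model = AutoModelForSequenceClassification.from_pretrained("GroNLP/hateBERT")
model.eval()

def forward_logits(input_ids, attention_mask):
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

text = "example post under review"
enc = tokenizer(text, return_tensors="pt")
# Baseline input: all pad tokens (an illustrative choice).
baseline = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)

# Attribute the predicted class score to the input embeddings.
lig = LayerIntegratedGradients(forward_logits, model.bert.embeddings)
pred = forward_logits(enc["input_ids"], enc["attention_mask"]).argmax(-1)
attrs = lig.attribute(
    inputs=enc["input_ids"],
    baselines=baseline,
    additional_forward_args=(enc["attention_mask"],),
    target=pred.item(),
)
scores = attrs.sum(dim=-1).squeeze(0)  # one attribution score per token
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"].squeeze(0))
for tok, s in zip(tokens, scores.tolist()):
    print(f"{tok:>12s}  {s:+.4f}")
```

A second stage would then pass the flagged text (and, plausibly, its most strongly attributed tokens) to a Llama 3.1-8B-Instruct prompt to draft the counter-narrative; the abstract does not specify that prompt design, so it is omitted here.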
2025
Language-Guided Temporal Token Pruning for Efficient VideoLLM Processing
Yogesh Kumar
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Vision Language Models (VLMs) struggle with long-form videos due to the quadratic complexity of attention mechanisms. We propose Language-Guided Temporal Token Pruning (LGTTP), which leverages temporal cues from queries to adaptively prune video tokens, preserving contextual continuity while reducing computational overhead. Unlike uniform pruning or keyframe selection, LGTTP retains higher token density in temporally relevant segments. Our model-agnostic framework integrates with TimeChat and LLaVA-Video, achieving a 65% reduction in computation while preserving 97-99% of the original performance. On QVHighlights, LGTTP improves HIT@1 by +9.5%, and on Charades-STA, it retains 99.6% of R@1. It excels on queries with explicit temporal markers and remains effective across general video understanding tasks.
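The abstract names the mechanism (query-guided, temporally adaptive token pruning) without giving its details, so the following is a hypothetical sketch of one way such pruning could look. The function prune_video_tokens, the cosine-similarity relevance score, the softmax temperature, and all tensor shapes are assumptions; keep_ratio=0.35 is chosen only to mirror the reported 65% computation reduction.

```python
# Hypothetical sketch in the spirit of LGTTP: keep more visual tokens
# in frames that are temporally relevant to the language query.
import torch
import torch.nn.functional as F

def prune_video_tokens(video_tokens, segment_embs, query_emb, keep_ratio=0.35):
    """
    video_tokens: (T, N, D)  N visual tokens per frame for T frames
    segment_embs: (T, D)     one pooled embedding per frame/segment
    query_emb:    (D,)       embedding of the language query
    Returns a list of per-frame tensors with adaptive token counts.
    """
    T, N, D = video_tokens.shape
    # Temporal relevance: cosine similarity between query and each segment.
    rel = F.cosine_similarity(segment_embs, query_emb.unsqueeze(0), dim=-1)
    weights = torch.softmax(rel / 0.1, dim=0)  # sharpen toward relevant spans

    # Split the global token budget across frames, at least one token each.
    budget = int(keep_ratio * T * N)
    per_frame = (weights * budget).long().clamp(min=1, max=N)

    pruned = []
    for t in range(T):
        # Within a frame, keep the tokens most similar to the query.
        tok_scores = F.cosine_similarity(
            video_tokens[t], query_emb.unsqueeze(0), dim=-1
        )
        keep = tok_scores.topk(int(per_frame[t])).indices.sort().values
        pruned.append(video_tokens[t, keep])
    return pruned

# Toy usage: 64 frames, 196 tokens per frame, 512-dim features.
vt = torch.randn(64, 196, 512)
se = vt.mean(dim=1)        # stand-in for pooled segment embeddings
q = torch.randn(512)       # stand-in for the query embedding
out = prune_video_tokens(vt, se, q)
print(sum(p.shape[0] for p in out), "tokens kept of", 64 * 196)
```

The variable-length output per frame is what distinguishes this style of pruning from uniform downsampling or keyframe selection: temporally relevant segments retain near-full token density while the rest are aggressively compressed.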