Venkatesh Velugubantla


2025

F2 (FutureFiction): Detection of Fake News on Futuristic Technology
Msvpj Sathvik | Venkatesh Velugubantla | Ravi Teja Potla
Proceedings of the Fifth Workshop on Speech, Vision, and Language Technologies for Dravidian Languages

Misinformation about futuristic technology and society is widespread. To detect such news accurately, algorithms require up-to-date knowledge. Large Language Models excel at NLP but cannot retrieve information about ongoing events or innovations; for example, GPT and its variants are limited to knowledge up to 2021. We introduce a new methodology for identifying fake news pertaining to futuristic technology and society. Leveraging the power of Google Knowledge, we enhance the capabilities of the GPT-3.5 language model, elevating its performance in misinformation detection. The proposed framework outperforms established baselines, achieving an accuracy of 81.04%. Moreover, we propose a novel dataset of around 21,000 fake news items in three languages (English, Telugu, and Tenglish) collected from various sources.

HateImgPrompts: Mitigating Generation of Images Spreading Hate Speech
Vineet Kumar Khullar | Venkatesh Velugubantla | Bhanu Prakash Reddy Rella | Mohan Krishna Mannava | Msvpj Sathvik
Proceedings of the 5th International Conference on Natural Language Processing for Digital Humanities

The emergence of artificial intelligence has proven beneficial to numerous organizations, particularly in its various applications for social welfare. One notable application lies in AI-driven image generation tools, which produce images based on provided prompts. While this technology holds potential for constructive use, it also carries the risk of being exploited for malicious purposes, such as propagating hate. To address this, we propose a novel dataset, “HateImgPrompts”. We benchmark the dataset with the latest models, including GPT-3.5 and LLAMA 2. The dataset consists of 9,467 prompts, and the accuracy of the classifier after fine-tuning on the dataset is around 81%.

AI Tools Can Generate Misculture Visuals! Detecting Prompts Generating Misculture Visuals For Prevention
Venkatesh Velugubantla | Raj Sonani | Msvpj Sathvik
Proceedings of the Fourth Workshop on NLP for Positive Impact (NLP4PI)

Advanced AI models that generate realistic images from text prompts offer new creative possibilities but also risk producing culturally insensitive or offensive content. To address this issue, we introduce a novel dataset designed to classify text prompts that could lead to the generation of harmful images misrepresenting different cultures and communities. By training machine learning models on this dataset, we aim to automatically identify and filter out harmful prompts before image generation, balancing cultural sensitivity with creative freedom. Benchmarking with state-of-the-art language models, our baseline models achieved an accuracy of 73.34%.