Ashwani Bhat
2022
COGMEN: COntextualized GNN based Multimodal Emotion recognitioN
Abhinav Joshi | Ashwani Bhat | Ayush Jain | Atin Singh | Ashutosh Modi
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Emotions are an inherent part of human interactions, and consequently, it is imperative to develop AI systems that understand and recognize human emotions. In a conversation involving multiple people, a person’s emotions are influenced by the other speakers’ utterances and by their own emotional state across utterances. In this paper, we propose the COntextualized Graph Neural Network based Multimodal Emotion recognitioN (COGMEN) system, which leverages local information (i.e., inter/intra dependency between speakers) and global information (context). The proposed model uses a Graph Neural Network (GNN) based architecture to model the complex dependencies (local and global information) in a conversation. Our model gives state-of-the-art (SOTA) results on the IEMOCAP and MOSEI datasets, and detailed ablation experiments show the importance of modeling information at both levels.
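As a rough illustration of the local-dependency idea, the sketch below builds intra- and inter-speaker adjacency matrices over a conversation window and applies one relational graph-convolution step. The feature size, window width, and layer form are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of the speaker-dependency graph idea, in plain PyTorch.
# All sizes and the two relation types (intra-/inter-speaker) are
# illustrative assumptions, not COGMEN's exact configuration.
import torch
import torch.nn as nn

def build_adjacency(speakers, window=2):
    """One adjacency matrix per relation: same-speaker vs. cross-speaker
    edges between utterances at most `window` turns apart."""
    n = len(speakers)
    intra = torch.zeros(n, n)
    inter = torch.zeros(n, n)
    for i in range(n):
        for j in range(max(0, i - window), min(n, i + window + 1)):
            if i == j:
                continue
            if speakers[i] == speakers[j]:
                intra[i, j] = 1.0
            else:
                inter[i, j] = 1.0
    return intra, inter

class RelationalGraphLayer(nn.Module):
    """One graph-convolution step with a separate weight per relation."""
    def __init__(self, dim):
        super().__init__()
        self.w_intra = nn.Linear(dim, dim)
        self.w_inter = nn.Linear(dim, dim)
        self.w_self = nn.Linear(dim, dim)

    def forward(self, x, intra, inter):
        # Row-normalize so each node averages over its neighbors.
        norm = lambda a: a / a.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(
            self.w_self(x)
            + norm(intra) @ self.w_intra(x)
            + norm(inter) @ self.w_inter(x)
        )

# Toy conversation: 5 utterances from 2 speakers, 16-d fused features.
speakers = ["A", "B", "A", "A", "B"]
feats = torch.randn(len(speakers), 16)
intra, inter = build_adjacency(speakers)
layer = RelationalGraphLayer(16)
print(layer(feats, intra, inter).shape)  # torch.Size([5, 16])
```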
CISLR: Corpus for Indian Sign Language Recognition
Abhinav Joshi | Ashwani Bhat | Pradeep S | Priya Gole | Shashwat Gupta | Shreyansh Agarwal | Ashutosh Modi
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Indian Sign Language, though used by a diverse community, still lacks well-annotated resources for developing systems that would enable sign language processing. In recent years, researchers have actively worked on sign languages like American Sign Language; however, Indian Sign Language still lags far behind in data-driven tasks like machine translation. To address this gap, in this paper, we introduce a new dataset, CISLR (Corpus for Indian Sign Language Recognition), for word-level recognition in Indian Sign Language using videos. The corpus has a large vocabulary of around 4700 words covering different topics and domains. Further, we propose a baseline model for word recognition from sign language videos. To handle the low-resource problem in Indian Sign Language, the proposed model consists of a prototype-based one-shot learner that leverages the resource-rich American Sign Language to learn generalized features for improving predictions in Indian Sign Language. Our experiments show that gesture features learned in another sign language can help perform one-shot predictions in CISLR.
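The prototype-based one-shot idea can be sketched in a few lines: each sign class is represented by its (single) support example's feature vector, and a query is assigned to the nearest prototype by cosine similarity. The random toy features below stand in for embeddings that, in the paper, come from a model trained on resource-rich American Sign Language data.

```python
# Minimal sketch of prototype-based one-shot classification, the general
# technique the CISLR baseline builds on. Features here are random toys,
# not the paper's actual sign-video embeddings.
import torch
import torch.nn.functional as F

def prototypes(support_feats):
    """support_feats: dict class_name -> (k, d) tensor of k examples.
    In the one-shot setting, k == 1 and the prototype is the example."""
    return {c: f.mean(dim=0) for c, f in support_feats.items()}

def predict(query, protos):
    """Assign the query to the class with the most similar prototype."""
    names = list(protos)
    sims = torch.stack([F.cosine_similarity(query, protos[c], dim=0)
                        for c in names])
    return names[sims.argmax().item()]

# Toy example: one 32-d feature vector per sign class.
torch.manual_seed(0)
support = {w: torch.randn(1, 32) for w in ["hello", "thanks", "water"]}
protos = prototypes(support)
query = support["thanks"][0] + 0.1 * torch.randn(32)  # noisy copy
print(predict(query, protos))  # "thanks"
```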
2021
Adv-OLM: Generating Textual Adversaries via OLM
Vijit Malik | Ashwani Bhat | Ashutosh Modi
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Deep learning models are susceptible to adversarial examples, i.e., inputs with imperceptible perturbations of the original input that result in successful attacks against these models. Analyzing these attacks on state-of-the-art transformers in NLP can help improve the robustness of these models against such adversarial inputs. In this paper, we present Adv-OLM, a black-box attack method that adapts the idea of Occlusion and Language Models (OLM) to the current state-of-the-art attack methods. OLM is used to rank the words of a sentence, which are later substituted using word replacement strategies. We experimentally show that our approach outperforms other attack methods on several text classification tasks.
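The occlusion-based ranking step at the heart of such attacks can be sketched as follows: score each word by the drop in the victim classifier's confidence when that word is occluded, then greedily substitute the highest-ranked words. The victim function, mask token, and candidate table below are toy placeholders; OLM proper samples replacements from a language model rather than using a single mask token.

```python
# Schematic of occlusion-style word ranking followed by greedy
# substitution. The victim model and substitution table are toy
# placeholders, not the paper's actual setup.
from typing import Callable, Dict, List

def rank_words(tokens: List[str],
               victim: Callable[[List[str]], float],
               mask: str = "[MASK]") -> List[int]:
    """Return word indices sorted by importance (largest confidence drop
    when the word is occluded). Masking is a simplification of OLM."""
    base = victim(tokens)
    drops = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + [mask] + tokens[i + 1:]
        drops.append((base - victim(occluded), i))
    return [i for _, i in sorted(drops, reverse=True)]

def attack(tokens, victim, candidates: Dict[str, List[str]],
           threshold=0.5):
    """Greedily substitute the highest-ranked words until the victim's
    confidence in the original label falls below `threshold`."""
    tokens = list(tokens)
    for i in rank_words(tokens, victim):
        for sub in candidates.get(tokens[i], []):
            trial = tokens[:i] + [sub] + tokens[i + 1:]
            if victim(trial) < victim(tokens):
                tokens = trial
                break
        if victim(tokens) < threshold:
            break
    return tokens

# Toy victim: confidence proportional to surviving "positive" words.
POSITIVE = {"great", "wonderful"}
victim = lambda toks: sum(t in POSITIVE for t in toks) / 2.0
sent = "the movie was great and wonderful".split()
subs = {"great": ["fine"], "wonderful": ["ok"]}
print(attack(sent, victim, subs))  # positive words swapped out
```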
Co-authors
- Ashutosh Modi 3
- Abhinav Joshi 2
- Ayush Jain 1
- Atin Singh 1
- Vijit Malik 1