Adithya Avvaru


2020

BERT at SemEval-2020 Task 8: Using BERT to Analyse Meme Emotions
Adithya Avvaru | Sanath Vobilisetty
Proceedings of the Fourteenth Workshop on Semantic Evaluation

Sentiment analysis is one of the most sought-after research problems among Natural Language Processing (NLP) researchers, and the range of problems it addresses keeps increasing. To date, most research has focused on predicting sentiment, or sentiment categories like sarcasm, humor, offense and motivation, on text data, while very limited research has addressed predicting or analyzing the sentiment of internet memes. We address this problem as part of “Task 8 of SemEval 2020: Memotion Analysis” and participated in all three tasks under Memotion Analysis. Our system, built on the state-of-the-art Transformer-based pre-trained Bidirectional Encoder Representations from Transformers (BERT), outperformed the baseline models on tasks A and C and performed close to the baseline model on task B. In this paper, we present the data used, our steps for data cleaning and preparation, the fine-tuning process for the BERT-based model, and finally the prediction of sentiments or sentiment categories. We found that sequence models like Long Short Term Memory (LSTM) and its variants performed below par in predicting the sentiments. We also performed a comparative analysis with other Transformer-based models like DistilBERT and XLNet.
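As a rough illustration of the kind of fine-tuning pipeline the abstract describes, here is a minimal sketch using the Hugging Face transformers library; the label set, hyperparameters, and placeholder data are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of fine-tuning BERT for meme-caption sentiment
# classification, assuming the Hugging Face `transformers` library.
# Label set and hyperparameters are illustrative, not the paper's.
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # e.g. negative / neutral / positive

texts = ["sample meme caption", "another caption"]  # placeholder data
labels = torch.tensor([0, 2])

batch = tokenizer(texts, padding=True, truncation=True,
                  max_length=128, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # loss computed internally
outputs.loss.backward()
optimizer.step()
```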

Detecting Sarcasm in Conversation Context Using Transformer-Based Models
Adithya Avvaru | Sanath Vobilisetty | Radhika Mamidi
Proceedings of the Second Workshop on Figurative Language Processing

Sarcasm detection, regarded as one of the sub-problems of sentiment analysis, is a tricky task because the introduction of sarcastic words can flip the sentiment of the sentence itself. To date, most research has revolved around detecting sarcasm in a single sentence, and there is very limited work on detecting sarcasm that arises from multiple sentences. Current models use Long Short Term Memory (LSTM) variants, with or without attention, to detect sarcasm in conversations. We showed that models using the state-of-the-art Bidirectional Encoder Representations from Transformers (BERT) to capture syntactic and semantic information across conversation sentences performed better than these models. Based on data analysis, we estimated the number of sentences in the conversation that can contribute to the sarcasm, and the results agree with this estimate. We also performed a comparative study of different versions of our BERT-based model against variants of the LSTM model and XLNet (both using the estimated number of conversation sentences) and found that the BERT-based models outperformed them.
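One common way to frame conversational sarcasm detection with BERT is as sentence-pair classification, with the last k context sentences as one segment and the response as the other. The sketch below illustrates this framing; the value of k and the example turns are illustrative assumptions, not the estimate reported in the paper.

```python
# Sketch of encoding a conversation for BERT sentence-pair
# classification: the last k context turns form segment A and the
# response forms segment B. k = 2 is illustrative only.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

context = ["I love waiting in line for hours.", "Best day ever."]
response = "Sounds like you had a blast."
k = 2  # estimated number of contributing context sentences (assumed)

pair = tokenizer(" ".join(context[-k:]), response,
                 padding=True, truncation=True, return_tensors="pt")
# `pair` carries token_type_ids separating context from response,
# ready for a BertForSequenceClassification head as sketched above.
```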

2019

CodeForTheChange at SemEval-2019 Task 8: Skip-Thoughts for Fact Checking in Community Question Answering
Adithya Avvaru | Anupam Pandey
Proceedings of the 13th International Workshop on Semantic Evaluation

The strengths of the scalable gradient tree boosting algorithm XGBoost and of the distributed sentence encoder Skip-Thought Vectors have not yet been explored by the community question answering (cQA) research community. We applied and combined these two effective methods to determine the factual nature of questions and answers. The work also includes experimentation with other popular classifier models such as the AdaBoost, Decision Tree, Random Forest, Extra Trees and XGBoost classifiers and a multi-layer neural network. In this paper, we present the features used, the approaches followed for feature engineering, the models experimented with, and finally the results.
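A minimal sketch of the described combination follows, feeding Skip-Thought-style sentence embeddings into an XGBoost classifier. Here skip_thought_encode is a hypothetical placeholder for an actual Skip-Thought encoder, and the labels and hyperparameters are illustrative, not the paper's.

```python
# Sketch: Skip-Thought-style sentence embeddings fed into XGBoost.
# `skip_thought_encode` is a hypothetical stand-in for a real
# Skip-Thought encoder; the `xgboost` API shown is real.
import numpy as np
from xgboost import XGBClassifier

def skip_thought_encode(sentences):
    """Hypothetical placeholder: one embedding per sentence."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(sentences), 4800))  # combine-skip dim

questions = ["Is the museum open on Fridays?",
             "What do you all think of this idea?"]
labels = [1, 0]  # 1 = factual, 0 = non-factual (illustrative)

X = skip_thought_encode(questions)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X, labels)
print(clf.predict(skip_thought_encode(["Does the bus run on Sundays?"])))
```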

2018

Affect in Tweets using Experts Model
Subba Reddy Oota | Adithya Avvaru | Mounika Reddy Marreddy | Radhika Mamidi
Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation