2022
IISERB Brains at SemEval-2022 Task 6: A Deep-learning Framework to Identify Intended Sarcasm in English
Tanuj Shekhawat | Manoj Kumar | Udaybhan Rathore | Aditya Joshi | Jasabanta Patro
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
This paper describes the system architectures and the models submitted by our team “IISERB Brains” to the SemEval 2022 Task 6 competition. We contested all three sub-tasks floated for the English dataset. On the leaderboard, we ranked 19th out of 43 teams for sub-task A, 8th out of 22 teams for sub-task B, and 13th out of 16 teams for sub-task C. Apart from the submitted results and models, we also report the other models and results that we obtained through our experiments after the organizers published the gold labels of their evaluation data. All of our code and links to additional resources are available on GitHub for reproducibility.
2021
A Simple Three-Step Approach for the Automatic Detection of Exaggerated Statements in Health Science News
Jasabanta Patro | Sabyasachee Baruah
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
There is a huge difference between a scientific journal reporting ‘wine consumption might be correlated to cancer’ and a media outlet publishing ‘wine causes cancer’ citing the journal’s results. The above example is a typical case of a scientific statement being exaggerated, an outcome of the rising problem of media manipulation. Given a pair of statements (say one from the source journal article and the other from the news article covering the results published in the journal), is it possible to ascertain with some confidence whether one is an exaggerated version of the other? This paper presents a surprisingly simple yet rational three-step approach that performs best for this task. We solve the task by breaking it into three sub-tasks: (a) given a statement from a scientific paper or press release, we first extract the relation phrase (e.g., ‘causes’ versus ‘might be correlated to’) connecting the dependent (e.g., ‘cancer’) and the independent (‘wine’) variables; (b) we classify the strength of the extracted relation phrase; and (c) we compare the strengths of the relation phrases extracted from the two statements to identify whether one statement contains an exaggerated version of the other, and to what extent. Through rigorous experiments, we demonstrate that our simple approach outperforms by far baseline models that compare state-of-the-art embeddings of the statement pairs through a binary classifier or recast the problem as a textual entailment task, which appears to be a very natural choice in this setting.
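The three-step decomposition described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the strength lexicon, its ordinal values, and the function names are all hypothetical, and step (a), the relation-phrase extraction itself, is assumed to have already happened.

```python
# Hypothetical toy lexicon mapping relation phrases to ordinal strengths
# (step (b): classify the strength of an extracted relation phrase).
STRENGTH = {
    "might be correlated to": 1,
    "is associated with": 2,
    "causes": 3,
}

def relation_strength(phrase: str) -> int:
    """Look up the ordinal strength of a relation phrase (0 if unknown)."""
    return STRENGTH.get(phrase.lower(), 0)

def compare_statements(source_phrase: str, news_phrase: str) -> str:
    """Step (c): compare the strengths extracted from the two statements."""
    diff = relation_strength(news_phrase) - relation_strength(source_phrase)
    if diff > 0:
        return "exaggerated"
    if diff < 0:
        return "downplayed"
    return "same"

# 'wine causes cancer' reporting on 'wine might be correlated to cancer':
print(compare_statements("might be correlated to", "causes"))  # exaggerated
```

The point of the sketch is the ordinal comparison: once relation phrases are mapped onto a strength scale, exaggeration detection reduces to comparing two integers.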
2020
Code-Switching Patterns Can Be an Effective Route to Improve Performance of Downstream NLP Applications: A Case Study of Humour, Sarcasm and Hate Speech Detection
Srijan Bansal | Vishal Garimella | Ayush Suhane | Jasabanta Patro | Animesh Mukherjee
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
In this paper, we demonstrate how code-switching patterns can be utilised to improve various downstream NLP applications. In particular, we encode various switching features to improve humour, sarcasm and hate speech detection. We believe that this simple linguistic observation can also be potentially helpful in improving other similar NLP applications.
2019
KGPChamps at SemEval-2019 Task 3: A deep learning approach to detect emotions in the dialog utterances.
Jasabanta Patro | Nitin Choudhary | Kalpit Chittora | Animesh Mukherjee
Proceedings of the 13th International Workshop on Semantic Evaluation
This paper describes our approach to SemEval-2019 Task 3: EmoContext, where, given a textual dialogue, i.e., a user utterance along with two turns of context, we have to classify the emotion associated with the utterance as one of the following classes: Happy, Sad, Angry or Others. To solve this problem, we experiment with different deep learning models, ranging from a simple bidirectional LSTM (Long Short-Term Memory) model to a comparatively complex attention model. We also experiment with ConceptNet word embeddings, as well as word embeddings generated by a character-level bidirectional LSTM. We fine-tune the different parameters and hyper-parameters associated with each of our models and report the evaluation measure, i.e., micro precision, along with the class-wise precision, recall and F1-score of each system. Our best-performing model is the bidirectional LSTM whose input word embeddings are the concatenation of the character-level bidirectional LSTM embedding and the ConceptNet embedding, with a highest micro-F1 score of 0.7261. We also report the class-wise precision, recall, and F1-score of the best-performing model along with those of the other models we experimented with.
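The input representation of the best-performing model above can be sketched as a simple vector concatenation. This is an illustration only, not the submitted system: the 64-d char-BiLSTM output size is a made-up assumption (ConceptNet Numberbatch embeddings are commonly 300-d), and the vectors here are placeholders.

```python
# Illustrative sketch: form a word's input embedding by concatenating its
# ConceptNet embedding with the output of a character-level BiLSTM.
CONCEPTNET_DIM = 300   # common ConceptNet Numberbatch dimension
CHAR_LSTM_DIM = 64     # hypothetical char-BiLSTM output size

def word_representation(conceptnet_vec, char_lstm_vec):
    """Concatenate the two component vectors into one input embedding."""
    assert len(conceptnet_vec) == CONCEPTNET_DIM
    assert len(char_lstm_vec) == CHAR_LSTM_DIM
    return list(conceptnet_vec) + list(char_lstm_vec)

vec = word_representation([0.0] * CONCEPTNET_DIM, [0.0] * CHAR_LSTM_DIM)
print(len(vec))  # 364
```

The concatenated vector then feeds the sentence-level bidirectional LSTM that performs the four-way emotion classification.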
A deep-learning framework to detect sarcasm targets
Jasabanta Patro | Srijan Bansal | Animesh Mukherjee
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
In this paper we propose a deep learning framework for sarcasm target detection in texts already labelled as sarcastic. Identifying sarcasm targets can help in many core natural language processing tasks such as aspect-based sentiment analysis and opinion mining. To begin with, we perform an empirical study of socio-linguistic features and identify those that are statistically significant indicators of sarcasm targets (p-values between 0.001 and 0.05). We then present a deep-learning framework augmented with these socio-linguistic features to detect sarcasm targets in sarcastic book snippets and tweets. We achieve a huge improvement in performance, in terms of exact match and dice scores, over the current state-of-the-art baseline.
2017
All that is English may be Hindi: Enhancing language identification through automatic ranking of the likeliness of word borrowing in social media
Jasabanta Patro | Bidisha Samanta | Saurabh Singh | Abhipsa Basu | Prithwish Mukherjee | Monojit Choudhury | Animesh Mukherjee
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
In this paper, we present a set of computational methods to identify the likeliness of a word being borrowed, based on signals from social media. In terms of Spearman’s correlation values, our methods perform more than twice as well (∼ 0.62) at predicting borrowing likeliness as the best-performing baseline (∼ 0.26) reported in the literature. Based on this likeliness estimate, we asked annotators to re-annotate the language tags of foreign words in predominantly native contexts. In 88% of cases the annotators felt that the foreign-language tag should be replaced by the native-language tag, indicating a huge scope for improvement in automatic language identification systems.