Muhammad Abulaish


2019

An LSTM-Based Deep Learning Approach for Detecting Self-Deprecating Sarcasm in Textual Data
Ashraf Kamal | Muhammad Abulaish
Proceedings of the 16th International Conference on Natural Language Processing

Self-deprecating sarcasm is a special category of sarcasm that is nowadays popular and useful for many real-life applications, such as brand endorsement, product campaigns, digital marketing, and advertising. The self-deprecating style of campaigning and marketing is mainly adopted to boost brand endorsement and product sales. In this paper, we propose an LSTM-based deep learning approach for detecting self-deprecating sarcasm in textual data. To the best of our knowledge, there is no prior work on self-deprecating sarcasm detection using deep learning techniques. Starting with a filtering step to identify self-referential tweets, the proposed approach applies an LSTM-based deep learning model to detect self-deprecating sarcasm. The proposed approach is evaluated over three Twitter datasets and performs significantly better in terms of precision, recall, and F-score.
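A minimal sketch of the two-stage pipeline the abstract describes: a filtering step that keeps self-referential tweets, followed by an LSTM classifier. The pronoun-based filter, the TensorFlow/Keras stack, and all hyperparameters (vocabulary size, embedding dimension, LSTM units) are illustrative assumptions, not details from the paper.

```python
# Sketch, not the paper's implementation: (1) filter self-referential
# tweets, (2) classify them with an LSTM. Hyperparameters are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

FIRST_PERSON = {"i", "me", "my", "mine", "myself", "we", "our", "us"}

def is_self_referential(tweet: str) -> bool:
    # Crude stand-in for the paper's filtering step: keep tweets that
    # contain a first-person pronoun.
    return any(tok in FIRST_PERSON for tok in tweet.lower().split())

def build_model(vocab_size: int = 20_000, embed_dim: int = 100,
                lstm_units: int = 64) -> tf.keras.Model:
    # Embedding -> LSTM -> sigmoid for binary (sarcastic / not) output.
    model = models.Sequential([
        layers.Embedding(vocab_size, embed_dim),
        layers.LSTM(lstm_units),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])
    return model
```

In use, only tweets passing `is_self_referential` would be tokenized, padded to a fixed length, and fed to the model, mirroring the filter-then-classify design of the approach.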

DRCoVe: An Augmented Word Representation Approach using Distributional and Relational Context
Md. Aslam Parwez | Muhammad Abulaish | Mohd Fazil
Proceedings of the 16th International Conference on Natural Language Processing

Word representation using the distributional information of words from a sizeable corpus is considered efficacious in many natural language processing and text mining applications. However, the distributional representation of a word is unable to capture distant relational knowledge that conveys relational semantics. In this paper, we propose DRCoVe, a novel word representation approach that combines distributional and relational contexts, augmenting the distributional representation of a word with relational semantics extracted as syntactic and semantic associations among entities in the underlying corpus. Unlike existing approaches that rely on external knowledge bases for relational semantics, DRCoVe uses typed dependencies (aka syntactic dependencies) to extract relational knowledge directly from the underlying corpus. The proposed approach is applied over a biomedical text corpus to learn word representations and is compared with GloVe, one of the most popular word embedding approaches. Evaluation results on various benchmark datasets for word similarity and word categorization tasks demonstrate the effectiveness of DRCoVe over GloVe.
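A rough sketch of the idea of augmenting a distributional vector with relational context mined from typed dependencies. It assumes spaCy for dependency parsing and NumPy for vectors; the concatenation of the two representations and the directional context counting are illustrative choices, not the paper's exact formulation.

```python
# Sketch under stated assumptions: count typed-dependency contexts per
# word, then append a sparse relational vector to the distributional one.
from collections import defaultdict
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this English model is installed

def relational_contexts(corpus: list[str]) -> dict:
    """Count (dependency-relation, neighbor-lemma) contexts per lemma."""
    contexts: dict = defaultdict(lambda: defaultdict(int))
    for doc in nlp.pipe(corpus):
        for tok in doc:
            if tok.dep_ == "ROOT":
                continue
            # Typed dependency child --dep--> head, counted both ways.
            contexts[tok.lemma_][(tok.dep_, tok.head.lemma_)] += 1
            contexts[tok.head.lemma_][("inv_" + tok.dep_, tok.lemma_)] += 1
    return contexts

def augment(word: str, dist_vec: np.ndarray,
            contexts: dict, context_index: dict) -> np.ndarray:
    """Concatenate the distributional vector with a relational-count vector."""
    rel_vec = np.zeros(len(context_index))
    for ctx, count in contexts.get(word, {}).items():
        if ctx in context_index:
            rel_vec[context_index[ctx]] = count
    return np.concatenate([dist_vec, rel_vec])
```

Here `context_index` maps each observed (relation, lemma) pair to a dimension; in practice the relational component would likely be weighted or reduced in dimensionality before being combined with a pretrained distributional vector such as one from GloVe.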