2020
JUSTMasters at SemEval-2020 Task 3: Multilingual Deep Learning Model to Predict the Effect of Context in Word Similarity
Nour Al-khdour | Mutaz Bni Younes | Malak Abdullah | Mohammad AL-Smadi
Proceedings of the Fourteenth Workshop on Semantic Evaluation
There is growing research interest in studying word similarity. Without a doubt, two words that are similar in one context may be considered different in another. Therefore, this paper investigates the effect of context on word similarity. The SemEval-2020 workshop provided a shared task (Task 3: Predicting the (Graded) Effect of Context in Word Similarity). In this task, the organizers provided unlabeled datasets for four languages: English, Croatian, Finnish, and Slovenian. Our team, JUSTMasters, participated in both subtasks of this competition. Our approach uses a weighted-average ensemble of different pretrained embedding techniques for each of the four languages. Our proposed model outperformed the baseline models in both subtasks and achieved the best results for subtask 2 in English and Finnish, with scores of 0.725 and 0.68, respectively. We ranked sixth in subtask 1, with scores of 0.738, 0.44, 0.546, and 0.512 for English, Croatian, Finnish, and Slovenian, respectively.
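The weighted-average ensembling step described in this abstract can be sketched as follows. This is a minimal illustration only; the per-model scores and weights below are hypothetical, not the team's actual values:

```python
# Weighted-average ensembling of per-model predictions, as a minimal sketch.
# Each embedding model predicts a similarity score for a word pair in context;
# the ensemble output is the weighted mean of those predictions.

def weighted_ensemble(predictions, weights):
    """Combine per-model scores with a weighted average.

    predictions: list of floats, one score per embedding model
    weights:     list of floats, one weight per model (need not sum to 1)
    """
    total_weight = sum(weights)
    return sum(p * w for p, w in zip(predictions, weights)) / total_weight

# Hypothetical scores from three embedding models for one word pair:
scores = [0.70, 0.62, 0.75]
weights = [0.5, 0.2, 0.3]   # e.g. tuned on a validation split
print(round(weighted_ensemble(scores, weights), 4))
```

A natural design choice here is to tune the weights per language on held-out data, since each pretrained embedding may perform differently across the four languages.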
HR@JUST Team at SemEval-2020 Task 4: The Impact of RoBERTa Transformer for Evaluation Common Sense Understanding
Heba Al-Jarrah | Rahaf Al-Hamouri | Mohammad AL-Smadi
Proceedings of the Fourteenth Workshop on Semantic Evaluation
This paper describes the results of our team HR@JUST's participation at SemEval-2020 Task 4 - Commonsense Validation and Explanation (ComVE) during the post-evaluation period. The task consists of three sub-tasks; we participated in sub-task A. We adopted a state-of-the-art approach, fine-tuning the RoBERTa model, which drops the Next Sentence Prediction (NSP) objective and uses dynamic masking, larger training data, and a larger batch size. With this approach we ranked 11th on the final test set leaderboard with an accuracy of 91.3%.
NLP@JUST at SemEval-2020 Task 4: Ensemble Technique for BERT and Roberta to Evaluate Commonsense Validation
Emran Al-Bashabsheh | Ayah Abu Aqouleh | Mohammad AL-Smadi
Proceedings of the Fourteenth Workshop on Semantic Evaluation
This paper presents the work of the NLP@JUST team at the SemEval-2020 Task 4 competition on commonsense validation and explanation (ComVE). The team participated in sub-task A (Validation), which checks whether a given statement is against common sense. Several models were trained (i.e., BERT, XLNet, and RoBERTa); however, the main models used were RoBERTa-large and BERT whole-word masking. We combined the predictions of both models using an average ensemble technique to improve overall performance. The evaluation shows that the implemented model achieved an accuracy of 93.9%, as published in the post-evaluation results on the leaderboard.
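The average-ensemble step can be illustrated with a small sketch: two classifiers each emit class probabilities for "makes sense" vs. "against common sense", and the final label is the argmax of their element-wise mean. The probability values below are hypothetical, not actual model outputs:

```python
# Average ensembling of two classifiers' probability outputs (toy sketch).

def average_ensemble(probs_a, probs_b):
    """Element-wise mean of two probability vectors of equal length."""
    return [(a + b) / 2 for a, b in zip(probs_a, probs_b)]

# Hypothetical class probabilities for one statement:
roberta_probs = [0.75, 0.25]   # e.g. RoBERTa-large output
bert_probs    = [0.50, 0.50]   # e.g. BERT whole-word-masking output

averaged = average_ensemble(roberta_probs, bert_probs)
prediction = max(range(len(averaged)), key=averaged.__getitem__)
print(averaged, prediction)    # [0.625, 0.375] 0
```

Averaging smooths out cases where one model is overconfident on an example the other model gets right, which is one plausible reason the ensemble edged out either model alone.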
KEIS@JUST at SemEval-2020 Task 12: Identifying Multilingual Offensive Tweets Using Weighted Ensemble and Fine-Tuned BERT
Saja Tawalbeh | Mahmoud Hammad | Mohammad AL-Smadi
Proceedings of the Fourteenth Workshop on Semantic Evaluation
This research presents our team KEIS@JUST's participation at SemEval-2020 Task 12, a shared task on multilingual offensive language identification. We participated in all the provided languages for all sub-tasks except sub-task A for English. Two main approaches were developed. The first, used for Arabic and English, is a weighted ensemble consisting of a Bi-GRU and a CNN followed by Gaussian noise and a global pooling layer, with the outputs multiplied by weights to improve overall performance. The second, used for the other languages, is transfer learning from BERT combined with recurrent neural networks such as Bi-LSTM and Bi-GRU followed by a global average pooling layer. Word embeddings and contextual embeddings were used as features; moreover, data augmentation was used only for Arabic.
SAJA at TRAC 2020 Shared Task: Transfer Learning for Aggressive Identification with XGBoost
Saja Tawalbeh | Mahmoud Hammad | Mohammad AL-Smadi
Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying
We developed a system based on transfer learning, in which Universal Sentence Encoder (USE) embeddings are fed into an XGBoost classifier to identify aggressive content in English text. A reference dataset was provided by TRAC 2020 to evaluate the developed approach. The approach achieved a weighted F1 of 60.75% in sub-task EN-A, ranking fourteenth out of sixteen teams, and a weighted F1 of 85.66% in sub-task EN-B, ranking sixth out of fifteen teams.
Team Alexa at NADI Shared Task
Mutaz Younes | Nour Al-khdour | Mohammad AL-Smadi
Proceedings of the Fifth Arabic Natural Language Processing Workshop
In this paper, we discuss our team's work on the NADI Shared Task. The task requires classifying Arabic tweets among 21 dialects. We tested several approaches, and the best one was the simplest. Our best submission used a Multinomial Naive Bayes (MNB) classifier (Small and Hsiao, 1985) with n-grams as features. Despite its simplicity, this classifier outperformed more complicated models such as BERT. Our best submitted score was a 17% F1-score and 35% accuracy.
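To illustrate the kind of model described above, here is a self-contained Multinomial Naive Bayes classifier over word n-gram counts. This is a toy sketch with invented two-dialect examples, not the team's actual pipeline or data:

```python
# Toy Multinomial Naive Bayes over word n-grams with Laplace smoothing.
import math
from collections import Counter, defaultdict

def ngrams(tokens, n):
    """Return all word n-grams of sizes 1..n as tuples."""
    feats = []
    for size in range(1, n + 1):
        feats += [tuple(tokens[i:i + size]) for i in range(len(tokens) - size + 1)]
    return feats

class MultinomialNB:
    """Minimal Multinomial Naive Bayes classifier."""

    def fit(self, docs, labels, n=2):
        self.n = n
        self.class_counts = Counter(labels)
        self.feat_counts = defaultdict(Counter)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            for feat in ngrams(doc.split(), n):
                self.feat_counts[label][feat] += 1
                self.vocab.add(feat)
        return self

    def predict(self, doc):
        best, best_lp = None, -math.inf
        total_docs = sum(self.class_counts.values())
        for label in self.class_counts:
            # Log prior plus smoothed log likelihood of each n-gram feature.
            lp = math.log(self.class_counts[label] / total_docs)
            denom = sum(self.feat_counts[label].values()) + len(self.vocab)
            for feat in ngrams(doc.split(), self.n):
                lp += math.log((self.feat_counts[label][feat] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Invented toy training data (transliterated, hypothetical dialect labels):
docs = ["shu hal akhbar", "shu sar maak", "ezzayak ya basha", "ezzay el hal"]
labels = ["levantine", "levantine", "egyptian", "egyptian"]
clf = MultinomialNB().fit(docs, labels)
print(clf.predict("shu hal hal"))
```

Dialect-marking function words and their n-gram contexts carry strong signal even in tiny models like this, which is consistent with the simple classifier beating heavier models in the shared task.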
2019
Team JUST at the MADAR Shared Task on Arabic Fine-Grained Dialect Identification
Bashar Talafha | Ali Fadel | Mahmoud Al-Ayyoub | Yaser Jararweh | Mohammad AL-Smadi | Patrick Juola
Proceedings of the Fourth Arabic Natural Language Processing Workshop
In this paper, we describe our team's effort on the MADAR Shared Task on Arabic Fine-Grained Dialect Identification. The task requires building a system capable of differentiating between 25 different Arabic dialects in addition to MSA. Our approach is simple: after preprocessing the data, we use Data Augmentation (DA) to enlarge the training data six-fold. We then build a language model, extract word-level and character-level n-gram TF-IDF features, and feed them into an MNB classifier. Despite its simplicity, the resulting model performs remarkably well, producing the 4th highest F-measure and region-level accuracy and the 5th highest precision, recall, city-level accuracy, and country-level accuracy among the participating teams.
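The character-level n-gram TF-IDF feature step mentioned above can be sketched as follows. This is a minimal illustration with toy strings; the team's actual preprocessing and feature extraction are richer:

```python
# Minimal character-level n-gram TF-IDF (raw term frequency, log idf).
import math
from collections import Counter

def char_ngrams(text, n=3):
    """All character n-grams of a string."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def tfidf(docs, n=3):
    """Return one {ngram: tf-idf weight} dict per document.

    tf  = raw count of the n-gram in the document
    idf = log(N / df), where df counts documents containing the n-gram
    """
    counts = [Counter(char_ngrams(d, n)) for d in docs]
    df = Counter()
    for c in counts:
        df.update(c.keys())
    n_docs = len(docs)
    return [
        {g: tf * math.log(n_docs / df[g]) for g, tf in c.items()}
        for c in counts
    ]

vectors = tfidf(["salam alaykum", "salamat ya sadiq"])
# An n-gram shared by both documents gets idf = log(2/2) = 0,
# while a document-specific n-gram keeps a positive weight:
print(vectors[0]["sal"], vectors[0]["yku"])
```

Down-weighting n-grams that appear in every document is what lets dialect-specific character patterns dominate the feature vectors fed to the MNB classifier.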
2016
SemEval-2016 Task 5: Aspect Based Sentiment Analysis
Maria Pontiki | Dimitris Galanis | Haris Papageorgiou | Ion Androutsopoulos | Suresh Manandhar | Mohammad AL-Smadi | Mahmoud Al-Ayyoub | Yanyan Zhao | Bing Qin | Orphée De Clercq | Véronique Hoste | Marianna Apidianaki | Xavier Tannier | Natalia Loukachevitch | Evgeniy Kotelnikov | Nuria Bel | Salud María Jiménez-Zafra | Gülşen Eryiğit
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)