Dalya Faraj


2021

SarcasmDet at SemEval-2021 Task 7: Detect Humor and Offensive based on Demographic Factors using RoBERTa Pre-trained Model
Dalya Faraj | Malak Abdullah
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)

This paper presents one of the top winning solution systems for Task 7 at SemEval-2021, HaHackathon: Detecting and Rating Humor and Offense. The competition is divided into two tasks: Task 1, with three sub-tasks (1a, 1b, and 1c), and Task 2. The goal of Task 1 is to predict whether a text would be considered humorous and, if so, how humorous it is and whether its humor rating would be perceived as controversial. The goal of Task 2 is to predict how offensive the text is to users in general. Our solution was developed using the RoBERTa pre-trained model with ensemble techniques. The paper describes the architecture of the submitted system, along with the experiments and the hyperparameter tuning that led to this robust result. Our model ranked third and fourth out of 50 teams in Tasks 1c and 1a, with F1 scores of 0.6270 and 0.9675, respectively. It also ranked among the top 10 models in Tasks 1b and 2, with RMSE scores of 0.5446 and 0.4469, respectively.
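The abstract mentions RoBERTa with ensemble techniques but not their exact form. Below is a minimal sketch, assuming the ensemble simply averages the softmax outputs of several independently fine-tuned RoBERTa classifiers; the checkpoint paths, averaging strategy, and example text are hypothetical and are not the authors' released code.

```python
# Hypothetical sketch: ensemble several fine-tuned RoBERTa classifiers by
# averaging their class probabilities (sub-task 1a style binary prediction).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Paths to independently fine-tuned RoBERTa checkpoints (assumed, not the paper's).
CHECKPOINTS = ["./roberta-humor-seed1", "./roberta-humor-seed2", "./roberta-humor-seed3"]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
models = [AutoModelForSequenceClassification.from_pretrained(p).eval() for p in CHECKPOINTS]

@torch.no_grad()
def predict_is_humor(text: str) -> int:
    """Return 1 if the ensemble judges the text humorous, else 0."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    # Average softmax probabilities across ensemble members, then take the argmax.
    probs = torch.stack([m(**inputs).logits.softmax(dim=-1) for m in models]).mean(dim=0)
    return int(probs.argmax(dim=-1))

print(predict_is_humor("I told my computer a joke, but it didn't get the byte."))
```

Averaging probabilities from models fine-tuned with different seeds is a common way to stabilize transformer predictions on small shared-task datasets, which may be what the ensemble refers to here.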

SarcasmDet at Sarcasm Detection Task 2021 in Arabic using AraBERT Pretrained Model
Dalya Faraj | Malak Abdullah
Proceedings of the Sixth Arabic Natural Language Processing Workshop

This paper presents one of the top five winning solutions for the Shared Task on Sarcasm and Sentiment Detection in Arabic (Subtask 1: Sarcasm Detection). The goal of the task is to identify whether a tweet is sarcastic or not. Our solution was developed using an ensemble technique with the AraBERT pre-trained model. We describe the architecture of the submitted solution and provide the experiments and hyperparameter tuning that led to this result. We also discuss and analyze the results, comparing in a table all the models we trained or tested in pursuit of a better score. Our model ranked fifth out of 27 teams with an F1 score of 0.5985. It is worth mentioning that our model achieved the highest accuracy score of 0.7830.
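As a companion to the ensemble sketch above, here is a minimal sketch of fine-tuning a single AraBERT member for binary sarcasm classification with the Hugging Face Trainer; the checkpoint name, toy tweets, and hyperparameters are plausible assumptions, not the values reported in the paper.

```python
# Hypothetical sketch: fine-tune an AraBERT checkpoint for sarcastic / not-sarcastic
# tweet classification using the Hugging Face Trainer API.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_NAME = "aubmindlab/bert-base-arabertv02"  # assumed AraBERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy examples standing in for the shared-task training tweets (1 = sarcastic).
train_data = Dataset.from_dict({
    "text": ["مثال تغريدة ساخرة", "مثال تغريدة عادية"],
    "label": [1, 0],
})
train_data = train_data.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sarcasm-arabert", num_train_epochs=3,
                           per_device_train_batch_size=16, learning_rate=2e-5),
    train_dataset=train_data,
)
trainer.train()
```

Several such fine-tuned checkpoints could then be combined with the same probability-averaging ensemble shown earlier.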
