Amina Gaber


2020

WESSA at SemEval-2020 Task 9: Code-Mixed Sentiment Analysis Using Transformers
Ahmed Sultan | Mahmoud Salim | Amina Gaber | Islam El Hosary
Proceedings of the Fourteenth Workshop on Semantic Evaluation

In this paper, we describe the system we submitted for SemEval-2020 Task 9, Sentiment Analysis for Code-Mixed Social Media Text, alongside other experiments. Our best-performing system is a transfer learning model that fine-tunes XLM-RoBERTa, a transformer-based multilingual masked language model, on monolingual English and Spanish data and on Spanish-English code-mixed data. Our system outperforms the official task baseline, achieving a 70.1% average F1-score on the official leaderboard test set. In later submissions, our system achieves a 75.9% average F1-score on the test set under the CodaLab username “ahmed0sultan”.
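
The abstract describes fine-tuning XLM-RoBERTa for code-mixed sentiment classification. The sketch below is a minimal illustration of that general approach using the Hugging Face transformers and datasets libraries; it is not the authors' code, and the file names, label scheme, and hyperparameters are assumptions made for the example.

```python
# Minimal sketch (not the authors' implementation): fine-tuning XLM-RoBERTa
# for 3-class sentiment classification on code-mixed text.
# File names, label mapping, and hyperparameters are illustrative assumptions.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import load_dataset

MODEL_NAME = "xlm-roberta-base"  # the paper fine-tunes XLM-RoBERTa

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

# Hypothetical CSV files with "text" and "label" columns
# (e.g. 0 = negative, 1 = neutral, 2 = positive).
dataset = load_dataset("csv", data_files={"train": "spanglish_train.csv",
                                          "validation": "spanglish_dev.csv"})

def tokenize(batch):
    # Tokenize tweets; code-mixed text needs no special handling because
    # XLM-RoBERTa's subword vocabulary covers both English and Spanish.
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="xlmr-codemixed",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```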