@inproceedings{bashmal-alzeer-2021-arsarcasm,
    title = "{A}r{S}arcasm Shared Task: An Ensemble {BERT} Model for {S}arcasm {D}etection in {A}rabic Tweets",
    author = "Bashmal, Laila  and
      AlZeer, Daliyah",
    editor = "Habash, Nizar  and
      Bouamor, Houda  and
      Hajj, Hazem  and
      Magdy, Walid  and
      Zaghouani, Wajdi  and
      Bougares, Fethi  and
      Tomeh, Nadi  and
      Abu Farha, Ibrahim  and
      Touileb, Samia",
    booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
    month = apr,
    year = "2021",
    address = "Kyiv, Ukraine (Virtual)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.wanlp-1.40/",
    pages = "323--328",
    abstract = "Sarcasm detection has never been easy for machines. In this work, we present our submission to sub-task 1 of the shared task on sarcasm and sentiment detection in Arabic, organized by the 6th Workshop on Arabic Natural Language Processing. We explored different approaches based on BERT models. First, we fine-tuned the AraBERTv02 model for the sarcasm detection task. Then, we used the Sentence-BERT model, trained with contrastive learning, to extract representative tweet embeddings. Finally, inspired by how the human brain comprehends both the surface and the implicit meanings of sarcastic tweets, we combined the sentence embeddings with the fine-tuned AraBERTv02 to further boost the performance of the model. Through the ensemble of the two models, our team ranked 5th out of 27 teams on the sarcasm detection sub-task, with an F1-score of 59.89{\%} on the official test data. This result is 2.36{\%} lower than the 1st place, which confirms the capability of the combined model in detecting sarcasm."
}