PALI-NLP at SemEval-2022 Task 6: iSarcasmEval- Fine-tuning the Pre-trained Model for Detecting Intended Sarcasm

Xiyang Du, Dou Hu, Jin Zhi, Lianxin Jiang, Xiaofeng Shi


Abstract
This paper describes the method we utilized in SemEval-2022 Task 6, iSarcasmEval: Intended Sarcasm Detection in English and Arabic. Our system achieved 1st place in Subtask B, which is to identify the categories of intended sarcasm. The proposed system integrates multiple BERT-based, RoBERTa-based, and BERTweet-based models with fine-tuning. In this task, we contributed the following: 1) we reveal the performance of several large pre-trained models on tasks involving tweet-like text; 2) our methods prove that excellent results can still be achieved in this particular task without a complex classifier, by adopting a proper training method; 3) we found that there is a hierarchical relationship among sarcasm types in this task.
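The abstract states that the system integrates multiple fine-tuned models but does not specify the combination strategy here. Below is a minimal, hypothetical sketch of one common way such an integration might work: soft voting, i.e., averaging the per-class probabilities produced by each fine-tuned model and taking the argmax. The probability values and the `soft_vote` helper are illustrative assumptions, not the authors' released code; the six class indices stand in for the iSarcasmEval Subtask B categories (sarcasm, irony, satire, understatement, overstatement, rhetorical question).

```python
# Hypothetical sketch: soft-voting ensemble over class-probability outputs
# from several fine-tuned models. Not the authors' actual implementation.

def soft_vote(prob_lists):
    """Average per-class probabilities across models; return the argmax class index."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])

# Made-up probabilities from three fine-tuned models over six sarcasm categories.
model_outputs = [
    [0.55, 0.20, 0.10, 0.05, 0.05, 0.05],  # e.g. a BERTweet-based model
    [0.40, 0.35, 0.10, 0.05, 0.05, 0.05],  # e.g. a RoBERTa-based model
    [0.50, 0.25, 0.10, 0.05, 0.05, 0.05],  # e.g. a BERT-based model
]
print(soft_vote(model_outputs))  # → 0 (the class with the highest averaged probability)
```

In practice each model would be fine-tuned separately on the task data and its softmax outputs fed into such a combiner.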
Anthology ID:
2022.semeval-1.112
Volume:
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
Month:
July
Year:
2022
Address:
Seattle, United States
Venue:
SemEval
SIGs:
SIGLEX | SIGSEM
Publisher:
Association for Computational Linguistics
Note:
Pages:
815–819
Language:
URL:
https://aclanthology.org/2022.semeval-1.112
DOI:
10.18653/v1/2022.semeval-1.112
Bibkey:
Cite (ACL):
Xiyang Du, Dou Hu, Jin Zhi, Lianxin Jiang, and Xiaofeng Shi. 2022. PALI-NLP at SemEval-2022 Task 6: iSarcasmEval- Fine-tuning the Pre-trained Model for Detecting Intended Sarcasm. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 815–819, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
PALI-NLP at SemEval-2022 Task 6: iSarcasmEval- Fine-tuning the Pre-trained Model for Detecting Intended Sarcasm (Du et al., SemEval 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.semeval-1.112.pdf
Data
iSarcasm
iSarcasmEval