Context-Aware Sarcasm Detection Using BERT

Arup Baruah, Kaushik Das, Ferdous Barbhuiya, Kuntal Dey


Abstract
In this paper, we present the results obtained by BERT, BiLSTM, and SVM classifiers on the shared task on Sarcasm Detection held as part of the Second Workshop on Figurative Language Processing. The shared task required the use of conversational context to detect sarcasm. We experimented by varying the amount of context used along with the response (the response is the text to be classified). The amounts of context used were (i) zero context, (ii) the last one, two, or three utterances, and (iii) all utterances. Including the last utterance of the dialogue along with the response improved the performance of the classifier on the Twitter data set. On the other hand, the best performance on the Reddit data set was obtained using only the response, without any contextual information. The BERT classifier obtained F-scores of 0.743 and 0.658 on the Twitter and Reddit data sets, respectively.
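The context-selection scheme described in the abstract can be sketched as a small helper that pairs the last k utterances of a dialogue with the response, in the style of a BERT sentence-pair input. This is a minimal illustrative sketch; the function name `build_bert_input` and the exact segment-pairing convention are assumptions, not details taken from the paper.

```python
def build_bert_input(context, response, k=1):
    """Pair the last k context utterances with the response (sketch).

    context: list of dialogue utterances, oldest first.
    response: the text to be classified.
    k: 0 -> no context; None -> all utterances; otherwise the last k.
    Returns (text_a, text_b) for a BERT-style sentence-pair encoder;
    text_b is None when no context is used.
    """
    # Select the amount of context to include (hypothetical convention).
    ctx = [] if k == 0 else (context if k is None else context[-k:])
    if not ctx:
        # Zero-context setting: classify the response alone.
        return response, None
    # Context utterances form segment A, the response forms segment B.
    return " ".join(ctx), response
```

Such a pair would typically be passed to a BERT tokenizer (e.g. `tokenizer(text_a, text_b, ...)` in HuggingFace Transformers), which inserts the [CLS] and [SEP] tokens itself.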
Anthology ID:
2020.figlang-1.12
Volume:
Proceedings of the Second Workshop on Figurative Language Processing
Month:
July
Year:
2020
Address:
Online
Editors:
Beata Beigman Klebanov, Ekaterina Shutova, Patricia Lichtenstein, Smaranda Muresan, Chee Wee, Anna Feldman, Debanjan Ghosh
Venue:
Fig-Lang
Publisher:
Association for Computational Linguistics
Pages:
83–87
URL:
https://aclanthology.org/2020.figlang-1.12
DOI:
10.18653/v1/2020.figlang-1.12
Cite (ACL):
Arup Baruah, Kaushik Das, Ferdous Barbhuiya, and Kuntal Dey. 2020. Context-Aware Sarcasm Detection Using BERT. In Proceedings of the Second Workshop on Figurative Language Processing, pages 83–87, Online. Association for Computational Linguistics.
Cite (Informal):
Context-Aware Sarcasm Detection Using BERT (Baruah et al., Fig-Lang 2020)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2020.figlang-1.12.pdf
Video:
http://slideslive.com/38929702