Abstract
We present a transformer-based sarcasm detection model that accounts for context from the entire conversation thread to make more robust predictions. Our model uses deep transformer layers to perform multi-head attention over the target utterance and the relevant context in the thread. These context-aware models are evaluated on two social media datasets, from Twitter and Reddit, and show improvements of 3.1% and 7.0% over their baselines. Our best models achieve F1-scores of 79.0% and 75.0% on the Twitter and Reddit datasets, respectively, placing among the highest-performing systems of the 36 participants in this shared task.
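The abstract describes encoding the target utterance jointly with its conversation-thread context so that transformer self-attention spans both. Below is a minimal, hypothetical sketch of that idea using Hugging Face Transformers; the encoder name (`bert-base-uncased`), the two-segment packing of context and target, and the linear classifier head are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of context-aware sarcasm detection, assuming a
# BERT-style encoder; model choice and classifier head are illustrative,
# not the paper's exact setup.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
classifier = torch.nn.Linear(encoder.config.hidden_size, 2)  # sarcastic / not sarcastic

def predict(context_utterances, target_utterance):
    # Pack the thread context as the first segment and the target
    # utterance as the second, so self-attention attends across both.
    context = " ".join(context_utterances)
    inputs = tokenizer(context, target_utterance,
                       truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, hidden)
    logits = classifier(hidden[:, 0])  # classify from the [CLS] position
    return logits.softmax(dim=-1)

probs = predict(["Great, another Monday."], "Wow, I just love waking up at 5am.")
```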
- Anthology ID: 2020.figlang-1.38
- Volume: Proceedings of the Second Workshop on Figurative Language Processing
- Month: July
- Year: 2020
- Address: Online
- Editors: Beata Beigman Klebanov, Ekaterina Shutova, Patricia Lichtenstein, Smaranda Muresan, Chee Wee, Anna Feldman, Debanjan Ghosh
- Venue: Fig-Lang
- Publisher: Association for Computational Linguistics
- Pages: 276–280
- URL: https://aclanthology.org/2020.figlang-1.38
- DOI: 10.18653/v1/2020.figlang-1.38
- Cite (ACL): Xiangjue Dong, Changmao Li, and Jinho D. Choi. 2020. Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media. In Proceedings of the Second Workshop on Figurative Language Processing, pages 276–280, Online. Association for Computational Linguistics.
- Cite (Informal): Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media (Dong et al., Fig-Lang 2020)
- PDF: https://preview.aclanthology.org/nschneid-patch-1/2020.figlang-1.38.pdf