BERT’s The Word : Sarcasm Target Detection using BERT
Pradeesh Parameswaran, Andrew Trotman, Veronica Liesaputra, David Eyers
Abstract
In 2019, the Australasian Language Technology Association (ALTA) organised a shared task to detect the target of sarcastic comments posted on social media. However, there were no winners, as it proved to be a difficult task. In this work, we revisit the task posed by ALTA using transformers, specifically BERT, given the current success of transformer-based models in various NLP tasks. We conducted our experiments on two BERT models (TD-BERT and BERT-AEN). We evaluated our models on the data set provided by ALTA (Reddit) and two additional data sets: ‘book snippets’ and ‘Tweets’. Our results show that our proposed method achieves a 15.2% improvement over the current state-of-the-art system on the Reddit data set and a 4% improvement on Tweets.
- Anthology ID:
- 2021.alta-1.21
- Volume:
- Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association
- Month:
- December
- Year:
- 2021
- Address:
- Online
- Venue:
- ALTA
- Publisher:
- Australasian Language Technology Association
- Pages:
- 185–191
- URL:
- https://aclanthology.org/2021.alta-1.21
- Cite (ACL):
- Pradeesh Parameswaran, Andrew Trotman, Veronica Liesaputra, and David Eyers. 2021. BERT’s The Word: Sarcasm Target Detection using BERT. In Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association, pages 185–191, Online. Australasian Language Technology Association.
- Cite (Informal):
- BERT’s The Word: Sarcasm Target Detection using BERT (Parameswaran et al., ALTA 2021)
- PDF:
- https://preview.aclanthology.org/ingestion-script-update/2021.alta-1.21.pdf