Abstract
Clickbait spoiling and spoiler type classification in the setting of the SemEval-2023 shared task 5 were used to explore transformer-based text classification in comparison to conventional, shallow-learned classification models. Additionally, an initial model for spoiler generation was explored. The task was to classify or generate spoilers for clickbait social media posts. The classification task was addressed by comparing different classifiers trained on hand-crafted features to pre-trained and fine-tuned RoBERTa transformer models. The spoiler generation task was formulated as a question answering task, using the clickbait posts as questions and the linked articles as the source from which to retrieve the answers. The results show that even off-the-shelf transformer models outperform shallow-learned models in the classification task. The spoiler generation task is more complex and requires a more advanced system.
- Anthology ID:
- 2023.semeval-1.238
- Volume:
- Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
- Month:
- July
- Year:
- 2023
- Address:
- Toronto, Canada
- Editors:
- Atul Kr. Ojha, A. Seza Doğruöz, Giovanni Da San Martino, Harish Tayyar Madabushi, Ritesh Kumar, Elisa Sartori
- Venue:
- SemEval
- SIG:
- SIGLEX
- Publisher:
- Association for Computational Linguistics
- Note:
- Pages:
- 1712–1717
- Language:
- URL:
- https://aclanthology.org/2023.semeval-1.238
- DOI:
- 10.18653/v1/2023.semeval-1.238
- Cite (ACL):
- Jüri Keller, Nicolas Rehbach, and Ibrahim Zafar. 2023. nancy-hicks-gribble at SemEval-2023 Task 5: Classifying and generating clickbait spoilers with RoBERTa. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), pages 1712–1717, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal):
- nancy-hicks-gribble at SemEval-2023 Task 5: Classifying and generating clickbait spoilers with RoBERTa (Keller et al., SemEval 2023)
- PDF:
- https://preview.aclanthology.org/proper-vol2-ingestion/2023.semeval-1.238.pdf