Abstract
In this paper, we present a possible solution to the SemEval-2023 shared task of generating spoilers for clickbait headlines. Using a Zero-Shot approach with two different Transformer architectures, BLOOM and RoBERTa, we generate three different types of spoilers: phrase, passage and multi. We found that RoBERTa pretrained for question answering performed better than BLOOM used for causal language modelling; however, both architectures proved promising for future attempts at such tasks.
- Anthology ID:
- 2023.semeval-1.66
- Volume:
- Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
- Month:
- July
- Year:
- 2023
- Address:
- Toronto, Canada
- Editors:
- Atul Kr. Ojha, A. Seza Doğruöz, Giovanni Da San Martino, Harish Tayyar Madabushi, Ritesh Kumar, Elisa Sartori
- Venue:
- SemEval
- SIG:
- SIGLEX
- Publisher:
- Association for Computational Linguistics
- Pages:
- 477–481
- URL:
- https://aclanthology.org/2023.semeval-1.66
- DOI:
- 10.18653/v1/2023.semeval-1.66
- Cite (ACL):
- Niels Krog and Manex Agirrezabal. 2023. Diane Simmons at SemEval-2023 Task 5: Is it possible to make good clickbait spoilers using a Zero-Shot approach? Check it out!. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), pages 477–481, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal):
- Diane Simmons at SemEval-2023 Task 5: Is it possible to make good clickbait spoilers using a Zero-Shot approach? Check it out! (Krog & Agirrezabal, SemEval 2023)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-1/2023.semeval-1.66.pdf