Abstract
The widespread success of language models has spurred the natural language processing (NLP) community to tackle tasks that demand implicit and intricate reasoning, drawing on human-like common-sense mechanisms. While vertical thinking tasks have garnered considerable attention, lateral thinking puzzles remain comparatively unexplored. To address this gap, we introduce BRAINTEASER: a multiple-choice question answering task meticulously crafted to evaluate a model’s capacity for lateral thinking and its ability to challenge default common-sense associations. At SemEval-2024 Task 9, for the first subtask (i.e., Sentence Puzzle), the organizers asked participants to develop models able to answer multi-answer brain-teasing questions. For this purpose, we propose the application of a DeBERTa model in a zero-shot configuration. Our proposed approach reaches an overall score of 0.250, suggesting significant room for improvement in future work.
- Anthology ID:
- 2024.semeval-1.45
- Volume:
- Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
- Month:
- June
- Year:
- 2024
- Address:
- Mexico City, Mexico
- Editors:
- Atul Kr. Ojha, A. Seza Doğruöz, Harish Tayyar Madabushi, Giovanni Da San Martino, Sara Rosenthal, Aiala Rosá
- Venue:
- SemEval
- SIG:
- SIGLEX
- Publisher:
- Association for Computational Linguistics
- Pages:
- 291–297
- URL:
- https://aclanthology.org/2024.semeval-1.45
- DOI:
- 10.18653/v1/2024.semeval-1.45
- Cite (ACL):
- Marco Siino. 2024. DeBERTa at SemEval-2024 Task 9: Using DeBERTa for Defying Common Sense. In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 291–297, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal):
- DeBERTa at SemEval-2024 Task 9: Using DeBERTa for Defying Common Sense (Siino, SemEval 2024)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-4/2024.semeval-1.45.pdf
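The paper page does not include code, but a minimal sketch of what "a DeBERTa model in a zero-shot configuration" for multiple-choice sentence puzzles could look like is given below. It assumes the Hugging Face `transformers` zero-shot-classification pipeline with the NLI-finetuned checkpoint `microsoft/deberta-large-mnli`; the checkpoint, prompt format, and scoring scheme are illustrative assumptions, not necessarily the exact configuration used in the paper.

```python
# Illustrative sketch only: zero-shot multiple-choice QA with an NLI-finetuned DeBERTa.
# The checkpoint and scoring scheme are assumptions; the paper's exact setup may differ.
from transformers import pipeline

# NLI-finetuned DeBERTa checkpoint usable by the zero-shot-classification pipeline.
classifier = pipeline("zero-shot-classification", model="microsoft/deberta-large-mnli")

def answer_puzzle(question: str, choices: list[str]) -> str:
    """Score each candidate answer against the question and return the top-ranked one."""
    result = classifier(question, candidate_labels=choices)
    return result["labels"][0]  # labels come back sorted by descending entailment score

# Hypothetical Sentence Puzzle-style example (not taken from the task data).
question = "A man shaves several times a day, yet he still has a beard. Who is he?"
choices = [
    "A barber",
    "A man who dislikes shaving",
    "A man with a fast-growing beard",
    "None of the above",
]
print(answer_puzzle(question, choices))
```

In this sketch the pipeline treats each answer option as a candidate label and ranks options by how strongly the NLI head judges them to be entailed by the question, which is one common way to use an NLI-finetuned encoder such as DeBERTa without task-specific fine-tuning.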