BAMO at SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense
Baktash Ansari, Mohammadmostafa Rostamkhani, Sauleh Eetemadi
Abstract
This paper outlines our approach to SemEval-2024 Task 9, BRAINTEASER: A Novel Task Defying Common Sense. The task aims to evaluate the ability of language models to think creatively. The dataset comprises multiple-choice questions that challenge models to think ‘outside the box’. We fine-tune two models, BERT and RoBERTa Large. Next, we employ a Chain of Thought (CoT) zero-shot prompting approach with six large language models, such as GPT-3.5, Mixtral, and Llama2. Finally, we utilize ReConcile, a technique that employs a ‘round table conference’ approach with multiple agents for zero-shot learning, to generate consensus answers among three selected language models. Our best method achieves an overall accuracy of 85 percent on the sentence puzzles subtask.
- Anthology ID:
- 2024.semeval-1.35
- Volume:
- Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
- Month:
- June
- Year:
- 2024
- Address:
- Mexico City, Mexico
- Editors:
- Atul Kr. Ojha, A. Seza Doğruöz, Harish Tayyar Madabushi, Giovanni Da San Martino, Sara Rosenthal, Aiala Rosá
- Venue:
- SemEval
- SIG:
- SIGLEX
- Publisher:
- Association for Computational Linguistics
- Pages:
- 224–232
- URL:
- https://aclanthology.org/2024.semeval-1.35
- DOI:
- 10.18653/v1/2024.semeval-1.35
- Cite (ACL):
- Baktash Ansari, Mohammadmostafa Rostamkhani, and Sauleh Eetemadi. 2024. BAMO at SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense. In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 224–232, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal):
- BAMO at SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense (Ansari et al., SemEval 2024)
- PDF:
- https://aclanthology.org/2024.semeval-1.35.pdf
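
The abstract above mentions zero-shot Chain-of-Thought prompting over multiple-choice puzzles and a ReConcile-style consensus among several models. The Python sketch below is purely illustrative and not the authors' code: it shows how one such prompt might be composed and how a simple majority vote over several models could stand in for ReConcile's round-table procedure. The `query_model` function and the model names are hypothetical placeholders.

```python
# Hypothetical sketch: zero-shot CoT prompting for a multiple-choice brainteaser,
# with a plain majority vote as a simplified stand-in for ReConcile's round table.
from collections import Counter

def build_cot_prompt(question: str, choices: list[str]) -> str:
    """Compose a zero-shot Chain-of-Thought prompt for one multiple-choice puzzle."""
    options = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    return (
        f"Question: {question}\n{options}\n"
        "Let's think step by step, then answer with a single letter."
    )

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model_name` and return its answer letter."""
    raise NotImplementedError("Replace with a real LLM API call.")

def consensus_answer(question: str, choices: list[str], models: list[str]) -> str:
    """Ask every model the same CoT prompt and return the most common answer letter."""
    prompt = build_cot_prompt(question, choices)
    votes = [query_model(m, prompt) for m in models]
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    # Example sentence puzzle; the prompt is printed rather than sent to any API.
    q = "A man shaves several times a day, yet he still has a long beard. How?"
    opts = ["He is a barber.", "He only pretends to shave.",
            "He wears a fake beard.", "None of the above."]
    print(build_cot_prompt(q, opts))
```

In this simplification each model votes once; ReConcile instead runs multiple discussion rounds in which agents see and may revise each other's answers before converging.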