Abstract
In this paper, we describe our participation in SemEval-2024 Task 9: BRAINTEASER: A Novel Task Defying Common Sense. We tackled the challenge by fine-tuning pre-trained models (BERT and RoBERTa Winogrande) while also augmenting the dataset with the LLMs ChatGPT and Gemini. Our best model achieved an accuracy of 0.93, along with an F1 score of 0.87 for the Entailment class, 0.94 for the Contradiction class, and 0.96 for the Neutral class.
- Anthology ID: 2024.semeval-1.163
- Volume: Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
- Month: June
- Year: 2024
- Address: Mexico City, Mexico
- Editors: Atul Kr. Ojha, A. Seza Doğruöz, Harish Tayyar Madabushi, Giovanni Da San Martino, Sara Rosenthal, Aiala Rosá
- Venue: SemEval
- SIG: SIGLEX
- Publisher: Association for Computational Linguistics
- Pages: 1121–1126
- URL: https://aclanthology.org/2024.semeval-1.163
- DOI: 10.18653/v1/2024.semeval-1.163
- Cite (ACL): Cecilia Reyes, Orlando Ramos-Flores, and Diego Martínez-Maqueda. 2024. IIMAS at SemEval-2024 Task 9: A Comparative Approach for Brainteaser Solutions. In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 1121–1126, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal): IIMAS at SemEval-2024 Task 9: A Comparative Approach for Brainteaser Solutions (Reyes et al., SemEval 2024)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2024.semeval-1.163.pdf
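The abstract describes fine-tuning pre-trained encoders (BERT and RoBERTa Winogrande) for a three-way Entailment / Contradiction / Neutral decision. As a minimal sketch of that training pattern only — the paper's actual models, data, and hyperparameters are not reproduced here, and a small random network stands in for the pre-trained encoder — the loop looks like this:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in encoder: a small random network plays the role of the
# pre-trained BERT / RoBERTa encoder (hypothetical feature sizes).
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())

# New three-way head for Entailment / Contradiction / Neutral.
head = nn.Linear(64, 3)

# Fine-tuning updates both the "pre-trained" encoder and the new head.
optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-2
)
loss_fn = nn.CrossEntropyLoss()

# Toy batch: 30 feature vectors with random gold labels (no real data).
x = torch.randn(30, 32)
y = torch.randint(0, 3, (30,))

losses = []
for _ in range(50):
    optimizer.zero_grad()
    loss = loss_fn(head(encoder(x)), y)  # cross-entropy over 3 classes
    loss.backward()
    optimizer.step()
    losses.append(loss.item())

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In practice one would load the real checkpoints (e.g. via a library such as Hugging Face Transformers) and feed tokenized sentence pairs instead of random vectors; the per-class F1 scores reported in the abstract would then be computed on held-out predictions from this kind of classifier.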