RiddleMasters at SemEval-2024 Task 9: Comparing Instruction Fine-tuning with Zero-Shot Approaches

Kejsi Take, Chau Tran


Abstract
This paper describes our contribution to SemEval-2024 Task 9: BRAINTEASER. We compared multiple zero-shot approaches using GPT-4, a state-of-the-art model, with Mistral-7B, a much smaller open-source LLM. While GPT-4 remains the clear winner across all zero-shot approaches, we show that fine-tuning Mistral-7B can achieve comparable, though marginally lower, results.
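To make the zero-shot setup concrete, below is a minimal sketch of prompting an instruction-tuned Mistral-7B on a multiple-choice brainteaser. It assumes the Hugging Face transformers API and the mistralai/Mistral-7B-Instruct-v0.2 checkpoint; the prompt wording and the answer_brainteaser helper are hypothetical illustrations, not the authors' exact setup.

```python
# Hypothetical zero-shot sketch (not the authors' exact prompt or pipeline),
# assuming the Hugging Face `transformers` API and an instruction-tuned
# Mistral-7B checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)

def answer_brainteaser(question: str, choices: list[str]) -> str:
    # Format the riddle as a lettered multiple-choice instruction.
    options = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    prompt = (
        "Answer the following brainteaser by replying with the letter of the "
        f"correct choice only.\n\nQuestion: {question}\nChoices:\n{options}\nAnswer:"
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
    # Greedy decoding: we only need a single answer letter, no sampling.
    output = model.generate(inputs, max_new_tokens=5, do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True).strip()

print(answer_brainteaser(
    "What can you hold in your right hand but never in your left hand?",
    ["Your breath", "Your left hand", "A secret", "None of the above"],
))
```

The same prompt format could be reused for the GPT-4 zero-shot baseline via an API call, with the fine-tuned Mistral-7B comparison swapping in the adapted checkpoint.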
Anthology ID:
2024.semeval-1.200
Volume:
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Atul Kr. Ojha, A. Seza Doğruöz, Harish Tayyar Madabushi, Giovanni Da San Martino, Sara Rosenthal, Aiala Rosá
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
1391–1396
URL:
https://aclanthology.org/2024.semeval-1.200
DOI:
10.18653/v1/2024.semeval-1.200
Cite (ACL):
Kejsi Take and Chau Tran. 2024. RiddleMasters at SemEval-2024 Task 9: Comparing Instruction Fine-tuning with Zero-Shot Approaches. In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 1391–1396, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
RiddleMasters at SemEval-2024 Task 9: Comparing Instruction Fine-tuning with Zero-Shot Approaches (Take & Tran, SemEval 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2024.semeval-1.200.pdf
Supplementary material:
2024.semeval-1.200.SupplementaryMaterial.txt