SubmissionNumber#=%=#209 FinalPaperTitle#=%=#RiddleMasters at SemEval-2024 Task 9: Comparing Instruction Fine-tuning with Zero-Shot Approaches ShortPaperTitle#=%=# NumberOfPages#=%=#6 CopyrightSigned#=%=#Kejsi Take JobTitle#==# Organization#==# Abstract#==#This paper describes our contribution to SemEval-2024 Task 9: BRAINTEASER. We compared multiple zero-shot approaches using GPT-4, a state-of-the-art model, with Mistral-7B, a much smaller open-source LLM. While GPT-4 remains the clear winner across all zero-shot approaches, we show that fine-tuning Mistral-7B can achieve comparable, though marginally lower, results. Author{1}{Firstname}#=%=#Kejsi Author{1}{Lastname}#=%=#Take Author{1}{Username}#=%=#kejsit Author{1}{Email}#=%=#kejsitake@nyu.edu Author{1}{Affiliation}#=%=#New York University Author{2}{Firstname}#=%=#Chau Author{2}{Lastname}#=%=#Tran Author{2}{Email}#=%=#cpt289@nyu.edu Author{2}{Affiliation}#=%=#New York University