Kejsi Take


2024

RiddleMasters at SemEval-2024 Task 9: Comparing Instruction Fine-tuning with Zero-Shot Approaches
Kejsi Take | Chau Tran
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)

This paper describes our contribution to SemEval-2024 Task 9: Brainteaser. We compared multiple zero-shot approaches using GPT-4, a state-of-the-art model, with Mistral-7B, a much smaller open-source LLM. While GPT-4 remains the clear winner across all zero-shot approaches, we show that fine-tuning Mistral-7B can achieve comparable, though marginally lower, results.