SubmissionNumber#=%=#230
FinalPaperTitle#=%=#ALF at SemEval-2024 Task 9: Exploring Lateral Thinking Capabilities of LMs through Multi-task Fine-tuning
ShortPaperTitle#=%=#
NumberOfPages#=%=#6
CopyrightSigned#=%=#Seyed Ali Farokh
JobTitle#==#
Organization#==#Amirkabir University of Technology, Tehran, Iran
Abstract#==#Recent advancements in natural language processing (NLP) have prompted the development of sophisticated reasoning benchmarks. This paper presents our system for the SemEval 2024 Task 9 competition and also investigates the efficacy of fine-tuning language models (LMs) on BrainTeaser—a benchmark designed to evaluate NLP models' lateral thinking and creative reasoning abilities. Our experiments focus on two prominent families of pre-trained models, BERT and T5. Additionally, we explore the potential benefits of multi-task fine-tuning on commonsense reasoning datasets to enhance performance. Our top-performing model, DeBERTa-v3-large, achieves an impressive overall accuracy of 93.33%, surpassing human performance.
Author{1}{Firstname}#=%=#Seyed Ali
Author{1}{Lastname}#=%=#Farokh
Author{1}{Username}#=%=#alifarokh
Author{1}{Email}#=%=#alifarokh@aut.ac.ir
Author{1}{Affiliation}#=%=#Amirkabir University of Technology
Author{2}{Firstname}#=%=#Hossein
Author{2}{Lastname}#=%=#Zeinali
Author{2}{Username}#=%=#zeinali
Author{2}{Email}#=%=#hzeinali@aut.ac.ir
Author{2}{Affiliation}#=%=#Amirkabir University of Technology

==========