SubmissionNumber#=%=#267 FinalPaperTitle#=%=#uTeBC-NLP at SemEval-2024 Task 9: Can LLMs be Lateral Thinkers? ShortPaperTitle#=%=# NumberOfPages#=%=#12 CopyrightSigned#=%=#Pouya Sadeghi JobTitle#==# Organization#==#School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran Abstract#==#Inspired by human cognition, Jiang et al. (2023) created a benchmark for assessing LLMs' lateral thinking, i.e., thinking outside the box. Building upon this benchmark, we investigate how different prompting methods enhance LLMs' performance on this task, revealing their inherent capacity for outside-the-box thinking. Participating in the Sentence Puzzle sub-task of SemEval-2024 Task 9, we explore prompt engineering methods: chain-of-thought (CoT) and direct prompting, enhancing prompts with informative descriptions, and contextualizing prompts using a retrieval-augmented generation (RAG) pipeline. Our experiments involve three LLMs: GPT-3.5, GPT-4, and Zephyr-7B-beta. Using GPT-4, we generate a dataset of thinking paths between riddles and options, validated by humans for quality. Our findings indicate that compressed informative prompts enhance performance, and that dynamic in-context learning improves model performance significantly. Furthermore, fine-tuning Zephyr on our dataset improves performance across other commonsense datasets, underscoring the value of innovative thinking.
Author{1}{Firstname}#=%=#Pouya Author{1}{Lastname}#=%=#Sadeghi Author{1}{Username}#=%=#ipouyall Author{1}{Email}#=%=#pouyasadeghi2012@gmail.com Author{1}{Affiliation}#=%=#Computer Engineering Student Author{2}{Firstname}#=%=#Amirhossein Author{2}{Lastname}#=%=#Abaskohi Author{2}{Username}#=%=#amirhosseinabaskohi Author{2}{Email}#=%=#amirhossein.abaskohi@gmail.com Author{2}{Affiliation}#=%=#Master of Computer Science Student at the University of British Columbia Author{3}{Firstname}#=%=#Yadollah Author{3}{Lastname}#=%=#Yaghoobzadeh Author{3}{Username}#=%=#yyaghoobzadeh Author{3}{Email}#=%=#y.yaghoobzadeh@gmail.com Author{3}{Affiliation}#=%=#University of Tehran ========== èéáğö