SubmissionNumber#=%=#33 FinalPaperTitle#=%=#OUNLP at SemEval-2024 Task 9: Retrieval-Augmented Generation for Solving Brain Teasers with LLMs ShortPaperTitle#=%=# NumberOfPages#=%=#7 CopyrightSigned#=%=#Vineet Saravanan JobTitle#==# Organization#==# Abstract#==#The advancement of natural language processing has given rise to a variety of large language models (LLMs) with capabilities extending into the realm of complex problem-solving, including brain teasers that challenge not only linguistic fluency but also logical reasoning. This paper documents our submission to the SemEval-2024 BrainTeaser task, in which we investigate the performance of state-of-the-art LLMs, such as GPT-3.5, GPT-4, and the Gemini model, on a diverse set of brain teasers, using prompt engineering as a tool to enhance the models' problem-solving abilities. We experimented with a series of structured prompts ranging from basic to those integrating task descriptions and explanations. Through a comparative analysis, we sought to determine which combinations of model and prompt yielded the highest accuracy in solving these puzzles. Our findings provide a snapshot of the current landscape of AI problem-solving and highlight the nuanced nature of LLM performance, influenced by both the complexity of the tasks and the sophistication of the prompts employed. Author{1}{Firstname}#=%=#Vineet Author{1}{Lastname}#=%=#Saravanan Author{1}{Username}#=%=#vineetsaravanan Author{1}{Email}#=%=#vineetsaravanan@gmail.com Author{1}{Affiliation}#=%=#Student Author{2}{Firstname}#=%=#Steven R. Author{2}{Lastname}#=%=#Wilson Author{2}{Username}#=%=#steverw Author{2}{Email}#=%=#stevenwilson@oakland.edu Author{2}{Affiliation}#=%=#Oakland University ==========