SubmissionNumber#=%=#25
FinalPaperTitle#=%=#Data-Augmentation-Based Dialectal Adaptation for LLMs
ShortPaperTitle#=%=#
NumberOfPages#=%=#12
CopyrightSigned#=%=#
JobTitle#==#
Organization#==#
Abstract#==#This report presents gmnlp's participation in the Dialect-Copa shared task at VarDial 2024~\cite{chifu-etal-2024-vardial}, which focuses on evaluating the commonsense reasoning capabilities of large language models (LLMs) on South Slavic micro-dialects. The task aims to assess how well LLMs can handle non-standard dialectal varieties, as their performance on standard languages is already well established. We propose an approach that combines the strengths of different types of language models and leverages data augmentation techniques to improve task performance on three South Slavic dialects: Chakavian, Cherkano, and Torlak. We conduct experiments using a language-family-focused encoder-based model (BERTić) and a domain-agnostic multilingual model (AYA-101). Our results demonstrate that the proposed data augmentation techniques lead to substantial performance gains across all three test datasets in the open-source model category. This work highlights the practical utility of data augmentation and the potential of LLMs in handling non-standard dialectal varieties, contributing to the broader goal of advancing natural language understanding in low-resource and dialectal settings.
Author{1}{Firstname}#=%=#Fahim
Author{1}{Lastname}#=%=#Faisal
Author{1}{Username}#=%=#ffaisal
Author{1}{Email}#=%=#ffaisal@gmu.edu
Author{1}{Affiliation}#=%=#George Mason University
Author{2}{Firstname}#=%=#Antonios
Author{2}{Lastname}#=%=#Anastasopoulos
Author{2}{Username}#=%=#aanastas
Author{2}{Email}#=%=#antonis@gmu.edu
Author{2}{Affiliation}#=%=#George Mason University

==========