SubmissionNumber#=%=#133
FinalPaperTitle#=%=#GreyBox at SemEval-2024 Task 4: Progressive Fine-tuning (for Multilingual Detection of Propaganda Techniques)
ShortPaperTitle#=%=#
NumberOfPages#=%=#6
CopyrightSigned#=%=#Nathan Roll
JobTitle#==#
Organization#==#
Abstract#==#We introduce a novel fine-tuning approach that effectively primes transformer-based language models to detect rhetorical and psychological techniques within internet memes. Our end-to-end system retains the multilingual and task-general capacities from its pretraining stages while adapting to domain intricacies using an increasingly targeted set of examples, achieving competitive rankings across English, Bulgarian, and North Macedonian. We find that our monolingual post-training regimen, despite using English-only data, is sufficient to improve task performance beyond equivalent zero-shot capabilities in 17 language varieties. To promote further research, we release our code publicly on GitHub.
Author{1}{Firstname}#=%=#Nathan
Author{1}{Lastname}#=%=#Roll
Author{1}{Username}#=%=#nathanroll
Author{1}{Email}#=%=#nroll@ucsb.edu
Author{1}{Affiliation}#=%=#UC Santa Barbara
Author{2}{Firstname}#=%=#Calbert
Author{2}{Lastname}#=%=#Graham
Author{2}{Username}#=%=#calb
Author{2}{Email}#=%=#crg29@cam.ac.uk
Author{2}{Affiliation}#=%=#University of Cambridge
==========