SubmissionNumber#=%=#11
FinalPaperTitle#=%=#Puer at SemEval-2024 Task 4: Fine-tuning Pre-trained Language Models for Meme Persuasion Technique Detection
ShortPaperTitle#=%=#
NumberOfPages#=%=#6
CopyrightSigned#=%=#Jiaxu Dao
JobTitle#==#
Organization#==#
Abstract#==#This paper summarizes our research on the multilingual detection of persuasion techniques in memes for SemEval-2024 Task 4. Our work focused on English Subtask 1 and was implemented with a roberta-large pre-trained model from the Transformers library, fine-tuned on a corpus of social media posts. Our method significantly outperforms the officially released baseline and ranked 7th on the English Subtask 1 test set. This paper also compares the performance of different deep learning model architectures, such as BERT, ALBERT, and XLM-RoBERTa, on the multilingual detection of persuasion techniques in memes. The source code for the experiments described in this paper will be released on GitHub.
Author{1}{Firstname}#=%=#Jiaxu
Author{1}{Lastname}#=%=#Dao
Author{1}{Email}#=%=#daojiaxu@peu.edu.cn
Author{1}{Affiliation}#=%=#Pu'er University
Author{2}{Firstname}#=%=#Zhuoying
Author{2}{Lastname}#=%=#Li
Author{2}{Email}#=%=#lizhuoying@peu.edu.cn
Author{2}{Affiliation}#=%=#Pu'er University
Author{3}{Firstname}#=%=#Youbang
Author{3}{Lastname}#=%=#Su
Author{3}{Email}#=%=#suyoubang@peu.edu.cn
Author{3}{Affiliation}#=%=#Pu'er University
Author{4}{Firstname}#=%=#Wensheng
Author{4}{Lastname}#=%=#Gong
Author{4}{Email}#=%=#gongwensheng@peu.edu.cn
Author{4}{Affiliation}#=%=#Pu'er University

==========