SubmissionNumber#=%=#95
FinalPaperTitle#=%=#LMEME at SemEval-2024 Task 4: Teacher Student Fusion - Integrating CLIP with LLMs for Enhanced Persuasion Detection
ShortPaperTitle#=%=#
NumberOfPages#=%=#6
CopyrightSigned#=%=#Shiyi Li, Yike Wang, Liang Yang, Shaowu Zhang and Hongfei Lin
JobTitle#==#
Organization#==#
Abstract#==#This paper describes our system for SemEval-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes. Our team proposes a detection system built on a Teacher Student Fusion framework. First, a Large Language Model serves as the teacher, performing abductive reasoning over the multimodal inputs to generate background knowledge about persuasion techniques, which assists in training a smaller downstream model. The student model adopts CLIP as the encoder for text and image features, and we incorporate an attention mechanism for modality alignment. Our proposed system achieves a Macro-F1 score of 0.8103, ranking 1st out of 20 on the English leaderboard of Subtask 2b. In Bulgarian, Macedonian, and Arabic, our system ranks 1st, 3rd, and 14th out of 15, respectively.
Author{1}{Firstname}#=%=#Shiyi
Author{1}{Lastname}#=%=#Li
Author{1}{Username}#=%=#lsylsy
Author{1}{Email}#=%=#lishiyieee@mail.dlut.edu.cn
Author{1}{Affiliation}#=%=#Dalian University of Technology
Author{2}{Firstname}#=%=#Yike
Author{2}{Lastname}#=%=#Wang
Author{2}{Username}#=%=#yikew
Author{2}{Email}#=%=#yike@mail.dlut.edu.cn
Author{2}{Affiliation}#=%=#Dalian University of Technology
Author{3}{Firstname}#=%=#Liang
Author{3}{Lastname}#=%=#Yang
Author{3}{Email}#=%=#liang@dlut.edu.cn
Author{3}{Affiliation}#=%=#Dalian University of Technology
Author{4}{Firstname}#=%=#Shaowu
Author{4}{Lastname}#=%=#Zhang
Author{4}{Email}#=%=#zhangsw@dlut.edu.cn
Author{4}{Affiliation}#=%=#Dalian University of Technology
Author{5}{Firstname}#=%=#Hongfei
Author{5}{Lastname}#=%=#Lin
Author{5}{Email}#=%=#hflin@dlut.edu.cn
Author{5}{Affiliation}#=%=#Dalian University of Technology

==========