SubmissionNumber#=%=#181
FinalPaperTitle#=%=#EURECOM at SemEval-2024 Task 4: Hierarchical Loss and Model Ensembling in Detecting Persuasion Techniques
ShortPaperTitle#=%=#
NumberOfPages#=%=#6
CopyrightSigned#=%=#Youri Peskine
JobTitle#==#
Organization#==#EURECOM, 450 Route des Chappes, 06410 Biot, France
Abstract#==#This paper describes the submission of team EURECOM at SemEval-2024 Task 4: Multilingual Detection of Persuasion Techniques in Memes. We tackled only the first sub-task, which consists of detecting 20 named persuasion techniques in the textual content of memes. We trained multiple BERT-based models (BERT, RoBERTa, and BERT pre-trained on harmful content detection) using different losses (Cross Entropy, Binary Cross Entropy, Focal Loss, and a custom-made hierarchical loss). The best results were obtained by leveraging the hierarchical nature of the data, outputting ancestor classes and training with a hierarchical loss. Our final submission consists of an ensemble of our top-3 models for each persuasion technique. We obtain hierarchical F1 scores of 0.655 (English), 0.345 (Bulgarian), 0.442 (North Macedonian) and 0.178 (Arabic) on the test set.
Author{1}{Firstname}#=%=#Youri
Author{1}{Lastname}#=%=#Peskine
Author{1}{Username}#=%=#ypesk
Author{1}{Email}#=%=#youri.peskine@eurecom.fr
Author{1}{Affiliation}#=%=#EURECOM
Author{2}{Firstname}#=%=#Raphael
Author{2}{Lastname}#=%=#Troncy
Author{2}{Username}#=%=#troncy
Author{2}{Email}#=%=#raphael.troncy@eurecom.fr
Author{2}{Affiliation}#=%=#EURECOM
Author{3}{Firstname}#=%=#Paolo
Author{3}{Lastname}#=%=#Papotti
Author{3}{Email}#=%=#paolo.papotti@eurecom.fr
Author{3}{Affiliation}#=%=#EURECOM

==========