@inproceedings{gu-etal-2022-mmvae,
    title = "{MMVAE} at {S}em{E}val-2022 Task 5: A Multi-modal Multi-task {VAE} on Misogynous Meme Detection",
    author = "Gu, Yimeng  and
      Castro, Ignacio  and
      Tyson, Gareth",
    editor = "Emerson, Guy  and
      Schluter, Natalie  and
      Stanovsky, Gabriel  and
      Kumar, Ritesh  and
      Palmer, Alexis  and
      Schneider, Nathan  and
      Singh, Siddharth  and
      Ratan, Shyam",
    booktitle = "Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.semeval-1.96/",
    doi = "10.18653/v1/2022.semeval-1.96",
    pages = "700--710",
    abstract = "Nowadays, memes have become common in day-to-day communication on social media platforms. They appear amusing, evocative and attractive to audiences. However, memes containing malicious content can be harmful to the targeted group and arouse public anger in the long run. In this paper, we study misogynous meme detection, a shared task in SemEval 2022 - Multimedia Automatic Misogyny Identification (MAMI). The challenge of misogynous meme detection is to co-represent multi-modal features. To tackle this challenge, we propose a Multi-modal Multi-task Variational AutoEncoder (MMVAE) to learn an effective co-representation of visual and textual features in the latent space, determine whether a meme contains misogynous information, and identify its fine-grained categories. Our model achieves 0.723 on sub-task A and 0.634 on sub-task B in terms of $F_{1}$ scores. We carry out comprehensive experiments on our model{'}s architecture and show that our approach significantly outperforms several strong uni-modal and multi-modal approaches. Our code is released on GitHub."
}