LT3 at SemEval-2020 Task 8: Multi-Modal Multi-Task Learning for Memotion Analysis

Pranaydeep Singh, Nina Bauwelinck, Els Lefever


Abstract
Internet memes have become a very popular mode of expression on social media networks today. Their multi-modal nature, arising from the combination of text and image, makes them a challenging object for automatic analysis. In this paper, we describe our contribution to the SemEval-2020 Memotion Analysis Task. We propose a Multi-Modal Multi-Task learning system, which incorporates “memebeddings”, viz. joint text and vision features, to learn and optimize for all three Memotion subtasks simultaneously. The experimental results show that the proposed system consistently outperforms the competition’s baseline, and the system setup with continual learning (where tasks are trained sequentially) obtains the best classification F1-scores.
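The core idea of the abstract, a shared joint text-and-vision representation feeding several task-specific heads, can be sketched as follows. This is a minimal illustration only: the fusion method (plain concatenation), the feature dimensions, and the head names and label counts are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def memebedding(text_feat, image_feat):
    # Joint "memebedding": here a simple concatenation of the two
    # modality feature vectors (an assumed fusion strategy).
    return np.concatenate([text_feat, image_feat])

# One linear head per hypothetical subtask, sharing the joint representation.
dims = {"sentiment": 3, "humour": 4, "intensity": 4}  # assumed label counts
d_text, d_img = 8, 8                                  # assumed feature sizes
heads = {task: rng.standard_normal((d_text + d_img, k))
         for task, k in dims.items()}

def forward(text_feat, image_feat):
    # All subtasks are predicted from the same shared "memebedding",
    # which is what makes the setup multi-task.
    z = memebedding(text_feat, image_feat)
    return {task: int(np.argmax(z @ W)) for task, W in heads.items()}
```

In a multi-task setup such as this, the losses of all heads would be combined during training, whereas the continual-learning variant mentioned in the abstract would instead train the heads one after another.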
Anthology ID:
2020.semeval-1.153
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Note:
Pages:
1155–1162
URL:
https://aclanthology.org/2020.semeval-1.153
DOI:
10.18653/v1/2020.semeval-1.153
Cite (ACL):
Pranaydeep Singh, Nina Bauwelinck, and Els Lefever. 2020. LT3 at SemEval-2020 Task 8: Multi-Modal Multi-Task Learning for Memotion Analysis. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1155–1162, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
LT3 at SemEval-2020 Task 8: Multi-Modal Multi-Task Learning for Memotion Analysis (Singh et al., SemEval 2020)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2020.semeval-1.153.pdf