Abstract
Social media generates massive amounts of multimedia content daily, pairing images with text and creating a pressing need to automate vision-and-language understanding for various multimodal classification tasks. Compared with the visual-lingual data commonly studied, social media posts tend to exhibit more implicit image-text relations. To better glue the cross-modal semantics therein, we capture hinting features from user comments, which are retrieved by jointly leveraging visual and lingual similarity. The classification tasks are then explored via self-training in a teacher-student framework, motivated by the limited labeled data scales in existing benchmarks. Extensive experiments are conducted on four multimodal social media benchmarks: image-text relation classification, sarcasm detection, sentiment classification, and hate speech detection. The results show that our method further advances the performance of previous state-of-the-art models, which employ neither comment modeling nor self-training.
- Anthology ID:
- 2022.emnlp-main.381
- Volume:
- Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
- Month:
- December
- Year:
- 2022
- Address:
- Abu Dhabi, United Arab Emirates
- Editors:
- Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 5644–5656
- URL:
- https://aclanthology.org/2022.emnlp-main.381
- DOI:
- 10.18653/v1/2022.emnlp-main.381
- Cite (ACL):
- Chunpu Xu and Jing Li. 2022. Borrowing Human Senses: Comment-Aware Self-Training for Social Media Multimodal Classification. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5644–5656, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal):
- Borrowing Human Senses: Comment-Aware Self-Training for Social Media Multimodal Classification (Xu & Li, EMNLP 2022)
- PDF:
- https://preview.aclanthology.org/ingest-acl-2023-videos/2022.emnlp-main.381.pdf
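The teacher-student self-training scheme named in the abstract can be sketched in its generic form: a teacher trained on gold labels pseudo-labels unlabeled posts, and a student retrains on the union of gold and confident pseudo-labels. This is a minimal illustrative sketch, not the paper's implementation; the toy nearest-centroid "model", the margin-based confidence, and the threshold are all assumptions for the sake of a self-contained example.

```python
# Hedged sketch of teacher-student self-training (pseudo-labeling).
# The nearest-centroid classifier over 1-D features is a stand-in
# "model"; the paper's actual multimodal encoder is not reproduced here.

def train_centroids(data):
    """Toy model: per-class mean of 1-D features."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Return (label, confidence) by distance to the nearest centroid."""
    dists = {y: abs(x - c) for y, c in centroids.items()}
    label = min(dists, key=dists.get)
    # Confidence as the margin between the two closest classes (assumption).
    ranked = sorted(dists.values())
    margin = ranked[1] - ranked[0] if len(ranked) > 1 else 1.0
    return label, margin

def self_train(labeled, unlabeled, rounds=2, threshold=0.5):
    """Teacher pseudo-labels confident unlabeled points; student retrains."""
    model = train_centroids(labeled)               # teacher on gold labels
    for _ in range(rounds):
        pseudo = []
        for x in unlabeled:
            y, conf = predict(model, x)
            if conf >= threshold:                  # keep confident predictions
                pseudo.append((x, y))
        model = train_centroids(labeled + pseudo)  # student on gold + pseudo
    return model
```

For example, `self_train([(0.0, 'neg'), (1.0, 'pos')], [0.1, 0.9, 0.5])` absorbs the two confident unlabeled points into their nearby classes while the ambiguous midpoint is filtered out by the confidence threshold.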