Few-shot Joint Multimodal Aspect-Sentiment Analysis Based on Generative Multimodal Prompt

Xiaocui Yang, Shi Feng, Daling Wang, Qi Sun, Wenfang Wu, Yifei Zhang, Pengfei Hong, Soujanya Poria


Abstract
We have witnessed the rapid proliferation of multimodal data on numerous social media platforms. Conventional studies typically require massive amounts of labeled data to train models for Multimodal Aspect-Based Sentiment Analysis (MABSA). However, collecting and annotating fine-grained multimodal data for MABSA is costly and labor-intensive. To alleviate this issue, we perform three MABSA-related tasks with only a small number of labeled multimodal samples. We first build diverse and comprehensive multimodal few-shot datasets according to the data distribution. To capture the specific prompt for each aspect term in a few-shot scenario, we propose a novel Generative Multimodal Prompt (GMP) model for MABSA, which includes a Multimodal Encoder module and an N-Stream Decoders module. We further introduce a subtask that predicts the number of aspect terms in each instance to construct the multimodal prompt. Extensive experiments on two datasets demonstrate that our approach outperforms strong baselines on two MABSA-related tasks in the few-shot setting.
Anthology ID:
2023.findings-acl.735
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11575–11589
URL:
https://aclanthology.org/2023.findings-acl.735
DOI:
10.18653/v1/2023.findings-acl.735
Bibkey:
Cite (ACL):
Xiaocui Yang, Shi Feng, Daling Wang, Qi Sun, Wenfang Wu, Yifei Zhang, Pengfei Hong, and Soujanya Poria. 2023. Few-shot Joint Multimodal Aspect-Sentiment Analysis Based on Generative Multimodal Prompt. In Findings of the Association for Computational Linguistics: ACL 2023, pages 11575–11589, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Few-shot Joint Multimodal Aspect-Sentiment Analysis Based on Generative Multimodal Prompt (Yang et al., Findings 2023)
PDF:
https://preview.aclanthology.org/emnlp22-frontmatter/2023.findings-acl.735.pdf
Video:
https://preview.aclanthology.org/emnlp22-frontmatter/2023.findings-acl.735.mp4