MMSD2.0: Towards a Reliable Multi-modal Sarcasm Detection System

Libo Qin, Shijue Huang, Qiguang Chen, Chenran Cai, Yudi Zhang, Bin Liang, Wanxiang Che, Ruifeng Xu


Abstract
Multi-modal sarcasm detection has attracted much recent attention. Nevertheless, the existing benchmark (MMSD) has shortcomings that hinder the development of reliable multi-modal sarcasm detection systems: (1) it contains spurious cues, which lead models to learn biases; and (2) its negative samples are not always reasonable. To solve these issues, we introduce MMSD2.0, a corrected dataset that fixes the shortcomings of MMSD by removing the spurious cues and re-annotating the unreasonable samples. Meanwhile, we present a novel framework called multi-view CLIP that is capable of leveraging multi-grained cues from multiple perspectives (i.e., the text, image, and text-image interaction views) for multi-modal sarcasm detection. Extensive experiments show that MMSD2.0 is a valuable benchmark for building reliable multi-modal sarcasm detection systems and that multi-view CLIP significantly outperforms the previous best baselines.
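
The three-view design described in the abstract can be illustrated with a minimal sketch. This is not the authors' released implementation: the class name MultiViewCLIP, the per-view linear heads, the concatenation-based interaction view, and fusion by summing logits are all assumptions for illustration; only the use of CLIP's text and image encoders follows the paper's description.

```python
# Minimal illustrative sketch of a "multi-view" CLIP sarcasm classifier.
# NOT the authors' code: head design and logit-sum fusion are assumptions.
import torch
import torch.nn as nn
from transformers import CLIPModel


class MultiViewCLIP(nn.Module):  # hypothetical name
    def __init__(self, clip_name: str = "openai/clip-vit-base-patch32",
                 num_labels: int = 2):
        super().__init__()
        self.clip = CLIPModel.from_pretrained(clip_name)
        d = self.clip.config.projection_dim  # shared text/image embedding size
        # One classification head per view.
        self.text_head = nn.Linear(d, num_labels)             # text view
        self.image_head = nn.Linear(d, num_labels)            # image view
        self.interaction_head = nn.Linear(2 * d, num_labels)  # interaction view

    def forward(self, input_ids, attention_mask, pixel_values):
        t = self.clip.get_text_features(input_ids=input_ids,
                                        attention_mask=attention_mask)
        v = self.clip.get_image_features(pixel_values=pixel_values)
        # The interaction view here is a simple concatenation of the two
        # embeddings; the paper's actual interaction module may differ.
        logits = (
            self.text_head(t)
            + self.image_head(v)
            + self.interaction_head(torch.cat([t, v], dim=-1))
        )
        return logits  # fused sarcastic / non-sarcastic scores


# Example usage (inputs prepared with transformers.CLIPProcessor):
#   processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
#   batch = processor(text=["a caption"], images=[pil_image],
#                     return_tensors="pt", padding=True)
#   logits = MultiViewCLIP()(batch["input_ids"], batch["attention_mask"],
#                            batch["pixel_values"])
```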
Anthology ID:
2023.findings-acl.689
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10834–10845
URL:
https://aclanthology.org/2023.findings-acl.689
Cite (ACL):
Libo Qin, Shijue Huang, Qiguang Chen, Chenran Cai, Yudi Zhang, Bin Liang, Wanxiang Che, and Ruifeng Xu. 2023. MMSD2.0: Towards a Reliable Multi-modal Sarcasm Detection System. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10834–10845, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
MMSD2.0: Towards a Reliable Multi-modal Sarcasm Detection System (Qin et al., Findings 2023)
PDF:
https://preview.aclanthology.org/nodalida-main-page/2023.findings-acl.689.pdf