Sparse Black-Box Multimodal Attack for Vision-Language Adversary Generation
Zhen Yu | Zhou Qin | Zhenhua Chen | Meihui Lian | Haojun Fu | Weigao Wen | Hui Xue | Kun He
Findings of the Association for Computational Linguistics: EMNLP 2023
Deep neural networks have been widely deployed in real-world scenarios, such as product restriction on e-commerce platforms and hate speech monitoring on social media, to ensure the secure governance of these platforms. However, illegal merchants often deceive the detection models by adding large-scale perturbations to prohibited products in order to earn illicit profits. Current adversarial attacks, which rely on imperceptible perturbations, struggle to simulate such adversarial behavior or to evaluate how vulnerable detection models are to it. To address this issue, we propose a novel black-box multimodal attack, termed Sparse Multimodal Attack (SparseMA), which leverages sparse perturbations to simulate the adversarial behavior of illegal merchants in the black-box scenario. Moreover, SparseMA bridges the gap between images and texts by treating separated image patches and text words uniformly in a discrete space. Extensive experiments demonstrate that SparseMA can identify a model's vulnerability to each modality, outperforming existing multimodal and unimodal attacks. To our knowledge, SparseMA is the first black-box sparse multimodal attack, and it can serve as an effective tool for evaluating the robustness of multimodal models to different modalities.
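To make the core idea concrete, the following is a minimal sketch of a greedy sparse black-box multimodal attack in the spirit the abstract describes: image patches and text words are pooled as a single set of discrete units, each unit is scored by the drop in the true-label probability it causes, and the highest-impact units are perturbed under a small budget. All names and choices here (`query_model`, the patch size, occluding a patch, masking a word) are illustrative assumptions, not the authors' actual SparseMA implementation.

```python
# Sketch of a greedy sparse black-box multimodal attack (illustrative only;
# not the authors' SparseMA algorithm). Assumes query_model(image, words)
# returns a numpy array of class probabilities.
import numpy as np

PATCH = 16  # assumed square patch size

def split_units(image, words):
    """Enumerate discrete units: every image patch and every text word."""
    h, w = image.shape[:2]
    patches = [("patch", (i, j))
               for i in range(0, h, PATCH) for j in range(0, w, PATCH)]
    tokens = [("word", k) for k in range(len(words))]
    return patches + tokens

def perturb(image, words, unit):
    """Apply an illustrative sparse perturbation to a single unit."""
    img, txt = image.copy(), list(words)
    kind, loc = unit
    if kind == "patch":
        i, j = loc
        img[i:i + PATCH, j:j + PATCH] = 0.0   # e.g. occlude one patch
    else:
        txt[loc] = "[MASK]"                   # e.g. mask one word
    return img, txt

def sparse_attack(image, words, label, query_model, budget=5):
    """Greedily perturb the units with the largest estimated impact."""
    units = split_units(image, words)
    base = query_model(image, words)[label]
    # Black-box scoring: one query per unit, measuring the probability drop.
    scores = [base - query_model(*perturb(image, words, u))[label]
              for u in units]
    order = sorted(range(len(units)), key=lambda i: scores[i], reverse=True)
    adv_img, adv_txt = image, words
    for idx in order[:budget]:
        adv_img, adv_txt = perturb(adv_img, adv_txt, units[idx])
        if query_model(adv_img, adv_txt).argmax() != label:
            break  # attack succeeded with a sparse set of perturbed units
    return adv_img, adv_txt
```

Because every step uses only the model's output probabilities, the sketch respects the black-box constraint, and the shared unit pool is what lets a single budget be spent across modalities, revealing which modality the model is more vulnerable to.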