Jiansheng Chen
2025
The Security Threat of Compressed Projectors in Large Vision-Language Models
Yudong Zhang | Ruobing Xie | Xingwu Sun | Jiansheng Chen | Zhanhui Kang | Di Wang | Yu Wang
Findings of the Association for Computational Linguistics: EMNLP 2025
The choice of a suitable vision-language projector (VLP) is critical to the successful training of large vision-language models (LVLMs). Mainstream VLPs can be broadly categorized into compressed and uncompressed projectors, each offering distinct advantages in performance and computational efficiency. However, their security implications have not been thoroughly examined. Our comprehensive evaluation reveals significant differences in their security profiles: compressed projectors exhibit substantial vulnerabilities, allowing adversaries to compromise LVLMs even with minimal knowledge of their structure. In stark contrast, uncompressed projectors demonstrate robust security properties and do not introduce additional vulnerabilities. These findings provide critical guidance for researchers in selecting VLPs that enhance the security and reliability of vision-language models. The code is available at https://github.com/btzyd/TCP.
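For readers unfamiliar with the projector taxonomy the abstract refers to, the sketch below illustrates the two families in broad strokes: an uncompressed projector (e.g., an MLP that maps every visual patch token into the LLM embedding space, as in LLaVA-style models) versus a compressed projector (e.g., a resampler whose learnable queries reduce the token count, as in Q-Former-style models). This is a generic PyTorch illustration under assumed dimensions, not the specific architectures or attack studied in the paper.

```python
import torch
import torch.nn as nn

class UncompressedProjector(nn.Module):
    """MLP projector: keeps every visual token (LLaVA-style)."""
    def __init__(self, vis_dim=1024, llm_dim=4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vis_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vis_tokens):           # (B, N, vis_dim)
        return self.mlp(vis_tokens)          # (B, N, llm_dim): token count unchanged

class CompressedProjector(nn.Module):
    """Resampler projector: a small set of learnable queries cross-attends
    to the visual tokens, compressing N patch tokens into K query tokens."""
    def __init__(self, vis_dim=1024, llm_dim=4096, num_queries=32, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, vis_dim))
        self.attn = nn.MultiheadAttention(vis_dim, num_heads, batch_first=True)
        self.proj = nn.Linear(vis_dim, llm_dim)

    def forward(self, vis_tokens):                        # (B, N, vis_dim)
        q = self.queries.unsqueeze(0).expand(vis_tokens.size(0), -1, -1)
        fused, _ = self.attn(q, vis_tokens, vis_tokens)   # (B, K, vis_dim)
        return self.proj(fused)                           # (B, K, llm_dim)

if __name__ == "__main__":
    x = torch.randn(2, 576, 1024)                # e.g. 24x24 patch tokens from a ViT encoder
    print(UncompressedProjector()(x).shape)      # torch.Size([2, 576, 4096])
    print(CompressedProjector()(x).shape)        # torch.Size([2, 32, 4096])
```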
QAVA: Query-Agnostic Visual Attack to Large Vision-Language Models
Yudong Zhang | Ruobing Xie | Jiansheng Chen | Xingwu Sun | Zhanhui Kang | Yu Wang
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
In typical multimodal tasks such as Visual Question Answering (VQA), adversarial attacks targeting a specific image and question can lead large vision-language models (LVLMs) to produce incorrect answers. However, a single image is commonly associated with multiple questions, and LVLMs may still answer other questions correctly even when the image has been adversarially perturbed for one specific question. To address this, we introduce the query-agnostic visual attack (QAVA), which aims to create robust adversarial examples that elicit incorrect responses to unspecified and unknown questions. Compared to traditional adversarial attacks focused on specific images and questions, QAVA significantly improves the effectiveness and efficiency of attacks on images when the question is unknown, achieving performance comparable to attacks on known target questions. Our research broadens the scope of visual adversarial attacks on LVLMs in practical settings, uncovering previously overlooked vulnerabilities. The code is available at https://github.com/btzyd/qava.
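As a rough illustration of the query-agnostic idea described above, the sketch below optimizes a single image perturbation against a pool of candidate questions rather than one known question, so the model's loss rises in expectation over questions. All names (`target_fn`, `question_pool`, the model interface) and hyperparameters are hypothetical placeholders; consult the linked repository for the authors' actual QAVA implementation.

```python
import torch

def query_agnostic_attack(model, image, question_pool, target_fn,
                          eps=8 / 255, alpha=1 / 255, steps=100, batch_q=4):
    """Hypothetical PGD-style sketch of a query-agnostic visual attack.

    `target_fn(model, image, questions)` is assumed to return a scalar loss
    (e.g., the LVLM's answer loss averaged over the sampled questions).
    This is NOT the released QAVA code.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Sample a small batch of questions to approximate the expectation
        # over the (unknown) question distribution.
        idx = torch.randint(len(question_pool), (batch_q,))
        questions = [question_pool[i] for i in idx]
        loss = target_fn(model, (image + delta).clamp(0, 1), questions)
        loss.backward()
        with torch.no_grad():
            # Ascend the loss so answers degrade for unseen questions as well.
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)        # keep perturbation inside the L_inf ball
            delta.grad = None
    return (image + delta).clamp(0, 1).detach()
```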