Making Every Step Effective: Jailbreaking Large Vision-Language Models Through Hierarchical KV Equalization

Shuyang Hao, Yiwei Wang, Bryan Hooi, Jun Liu, Muhao Chen, Zi Huang, Yujun Cai


Abstract
In the realm of large vision-language models (LVLMs), adversarial jailbreak attacks serve as a red-teaming approach to identify safety vulnerabilities of these models and their associated defense mechanisms. However, we identify a critical limitation: not every adversarial optimization step leads to a positive outcome, and indiscriminately accepting optimization results at each step may reduce the overall attack success rate. To address this challenge, we introduce HKVE (Hierarchical Key-Value Equalization), an innovative jailbreaking framework that selectively accepts gradient optimization results based on the distribution of attention scores across different layers, ensuring that every optimization step positively contributes to the attack. Extensive experiments demonstrate HKVE’s significant effectiveness, achieving attack success rates of 75.08% on MiniGPT4, 85.84% on LLaVA and 81.00% on Qwen-VL, substantially outperforming existing methods by margins of 20.43%, 21.01% and 26.43% respectively. Furthermore, making every step effective not only leads to an increase in attack success rate but also allows for a reduction in the number of iterations, thereby lowering computational costs.
Anthology ID:
2025.findings-emnlp.618
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11528–11543
URL:
https://preview.aclanthology.org/ingest-luhme/2025.findings-emnlp.618/
DOI:
10.18653/v1/2025.findings-emnlp.618
Cite (ACL):
Shuyang Hao, Yiwei Wang, Bryan Hooi, Jun Liu, Muhao Chen, Zi Huang, and Yujun Cai. 2025. Making Every Step Effective: Jailbreaking Large Vision-Language Models Through Hierarchical KV Equalization. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 11528–11543, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Making Every Step Effective: Jailbreaking Large Vision-Language Models Through Hierarchical KV Equalization (Hao et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingest-luhme/2025.findings-emnlp.618.pdf
Checklist:
2025.findings-emnlp.618.checklist.pdf