Exploring Jailbreak Attacks on LLMs through Intent Concealment and Diversion

Tiehan Cui, Yanxu Mao, Peipei Liu, Congying Liu, Datao You


Abstract
Although large language models (LLMs) have achieved remarkable advancements, their security remains a pressing concern. One major threat is jailbreak attacks, where adversarial prompts bypass model safeguards to elicit harmful or objectionable content. Researchers study jailbreak attacks to better understand the security and robustness of LLMs. However, existing jailbreak attack methods face two main challenges: (1) an excessive number of iterative queries, and (2) poor generalization across models. In addition, recent jailbreak evaluation datasets focus primarily on question-answering scenarios and pay little attention to text-generation tasks that require the accurate regeneration of toxic content. To tackle these challenges, we propose two contributions: (1) ICE, a novel black-box jailbreak method that employs Intent Concealment and divErsion to effectively circumvent security constraints. ICE achieves a high attack success rate (ASR) with a single query, significantly improving efficiency and transferability across different models. (2) BiSceneEval, a comprehensive dataset designed for assessing LLM robustness in question-answering and text-generation tasks. Experimental results demonstrate that ICE outperforms existing jailbreak techniques, revealing critical vulnerabilities in current defense mechanisms. Our findings underscore the necessity of a hybrid security strategy that integrates predefined security mechanisms with real-time semantic decomposition to enhance the security of LLMs.
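The abstract's headline metric is attack success rate (ASR). For readers unfamiliar with it, the minimal Python sketch below shows how ASR is conventionally computed from per-attempt success judgments; the function name and example numbers are illustrative and not taken from the paper.

def attack_success_rate(outcomes):
    """ASR: fraction of jailbreak attempts judged successful."""
    # outcomes: one boolean per attempted adversarial prompt,
    # True if a judge deemed the model's response non-refusing/harmful.
    if not outcomes:
        return 0.0
    return sum(outcomes) / len(outcomes)

# Example: 7 successful attempts out of 10 gives ASR = 0.7.
print(attack_success_rate([True] * 7 + [False] * 3))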
Anthology ID:
2025.findings-acl.1067
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
20754–20768
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.1067/
Cite (ACL):
Tiehan Cui, Yanxu Mao, Peipei Liu, Congying Liu, and Datao You. 2025. Exploring Jailbreak Attacks on LLMs through Intent Concealment and Diversion. In Findings of the Association for Computational Linguistics: ACL 2025, pages 20754–20768, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Exploring Jailbreak Attacks on LLMs through Intent Concealment and Diversion (Cui et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.1067.pdf