Zhan Qin
2025
DELMAN: Dynamic Defense Against Large Language Model Jailbreaking with Model Editing
Yi Wang | Fenghua Weng | Sibei Yang | Zhan Qin | Minlie Huang | Wenjie Wang
Findings of the Association for Computational Linguistics: ACL 2025
Large Language Models (LLMs) are widely applied in decision making, but their deployment is threatened by jailbreak attacks, where adversarial users manipulate model behavior to bypass safety measures. Existing defense mechanisms, such as safety fine-tuning and model editing, either require extensive parameter modifications or lack precision, leading to performance degradation on general tasks that makes them unsuitable for post-deployment safety alignment. To address these challenges, we propose DELMAN (**D**ynamic **E**diting for **L**L**M**s J**A**ilbreak Defe**N**se), a novel approach leveraging direct model editing for precise, dynamic protection against jailbreak attacks. DELMAN directly updates a minimal set of relevant parameters to neutralize harmful behaviors while preserving the model’s utility. To avoid triggering a safe response in benign contexts, we incorporate a KL-divergence regularization term that keeps the updated model consistent with the original model when processing benign queries. Experimental results demonstrate that DELMAN outperforms baseline methods in mitigating jailbreak attacks while preserving the model’s utility, and that it adapts seamlessly to new attack instances, providing a practical and efficient solution for post-deployment model protection.
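The KL-divergence regularization described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes access to next-token logits from the edited model and the frozen original model on benign queries, and the helper names (`kl_benign_regularizer`, `total_edit_loss`, `kl_weight`) are hypothetical.

```python
# Minimal sketch (not the authors' code) of the KL-divergence regularization idea in
# DELMAN: the edited model is penalized for drifting from the frozen original model's
# next-token distributions on benign queries, so benign behavior is preserved.
import torch
import torch.nn.functional as F

def kl_benign_regularizer(edited_logits: torch.Tensor,
                          original_logits: torch.Tensor) -> torch.Tensor:
    """KL(p_original || p_edited), averaged over benign-query token positions.

    Both inputs have shape (num_positions, vocab_size); the original model is frozen.
    """
    log_p_edited = F.log_softmax(edited_logits, dim=-1)
    p_original = F.softmax(original_logits, dim=-1).detach()
    # F.kl_div expects log-probabilities as input and probabilities as target;
    # 'batchmean' sums over the vocabulary and averages over positions.
    return F.kl_div(log_p_edited, p_original, reduction="batchmean")

def total_edit_loss(edit_loss: torch.Tensor,
                    edited_logits: torch.Tensor,
                    original_logits: torch.Tensor,
                    kl_weight: float = 0.1) -> torch.Tensor:
    # edit_loss drives a safe refusal on harmful prompts (computed elsewhere);
    # the KL term keeps the updated parameters consistent on benign inputs.
    return edit_loss + kl_weight * kl_benign_regularizer(edited_logits, original_logits)
```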
Don’t Say No: Jailbreaking LLM by Suppressing Refusal
Yukai Zhou | Jian Lou | Zhijie Huang | Zhan Qin | Sibei Yang | Wenjie Wang
Findings of the Association for Computational Linguistics: ACL 2025
Ensuring the safety alignment of Large Language Models (LLMs) is critical for generating responses consistent with human values. However, LLMs remain vulnerable to jailbreaking attacks, where carefully crafted prompts manipulate them into producing toxic content. One category of such attacks reformulates the task as an optimization problem, aiming to elicit affirmative responses from the LLM. However, these methods rely heavily on predefined objectionable behaviors, limiting their effectiveness and adaptability to diverse harmful queries. In this study, we first identify why the vanilla target loss is suboptimal and then propose enhancements to the loss objective. We introduce the DSN (Don’t Say No) attack, which combines a cosine decay schedule with refusal suppression to achieve higher success rates. Extensive experiments demonstrate that DSN outperforms baseline attacks and achieves state-of-the-art attack success rates (ASR). DSN also shows strong universality and transferability to unseen datasets and black-box models.
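As a rough illustration of the loss modification described above, the sketch below augments a standard affirmative-target loss with a refusal-suppression term whose weight follows a cosine decay schedule. Where exactly the decay is applied, and all function and variable names (`dsn_loss`, `cosine_decay`, `refusal_ids`), are assumptions for illustration rather than the paper's exact formulation.

```python
# Rough sketch (an illustration, not the paper's exact formulation) of the DSN loss:
# the vanilla affirmative-target loss is augmented with a refusal-suppression term,
# and a cosine decay schedule modulates the weight of the augmented term over steps.
import math
import torch
import torch.nn.functional as F

def cosine_decay(step: int, total_steps: int,
                 w_max: float = 1.0, w_min: float = 0.0) -> float:
    """Weight that decays from w_max to w_min following a cosine schedule."""
    return w_min + 0.5 * (w_max - w_min) * (1.0 + math.cos(math.pi * step / total_steps))

def dsn_loss(logits: torch.Tensor,        # (seq_len, vocab) logits on the suffix-augmented prompt
             target_ids: torch.Tensor,    # token ids of the affirmative target, e.g. "Sure, here is ..."
             target_slice: slice,         # positions that predict the target tokens
             refusal_ids: torch.Tensor,   # token ids of refusal keywords, e.g. "sorry", "cannot"
             step: int,
             total_steps: int) -> torch.Tensor:
    # (1) vanilla target loss: maximize the likelihood of the affirmative response
    target_loss = F.cross_entropy(logits[target_slice], target_ids)
    # (2) refusal suppression: push down probability mass on refusal keywords
    #     at the positions where the response begins
    refusal_probs = F.softmax(logits[target_slice], dim=-1)[:, refusal_ids]
    refusal_loss = refusal_probs.sum(dim=-1).mean()
    # (3) combine, with the suppression weight following the cosine decay schedule
    alpha = cosine_decay(step, total_steps)
    return target_loss + alpha * refusal_loss
```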
2024
Cross-modality Information Check for Detecting Jailbreaking in Multimodal Large Language Models
Yue Xu | Xiuyuan Qi | Zhan Qin | Wenjie Wang
Findings of the Association for Computational Linguistics: EMNLP 2024
Co-authors
- Wenjie Wang 3
- Sibei Yang 2
- Minlie Huang 1
- Zhijie Huang 1
- Jian Lou 1