Julius Broomfield
2025
Jailbreak-Tuning: Models Efficiently Learn Jailbreak Susceptibility
Brendan Murphy
|
Dillon Bowen
|
Shahrad Mohammadzadeh
|
Tom Tseng
|
Julius Broomfield
|
Adam Gleave
|
Kellin Pelrine
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
AI systems are rapidly advancing in capability, and frontier model developers broadly acknowledge the need for safeguards against serious misuse. However, this paper demonstrates that fine-tuning, whether via open weights or closed fine-tuning APIs, can produce helpful-only models with safeguards destroyed. In contrast to prior work which is blocked by modern moderation systems or achieved only partial removal of safeguards or degraded output quality, our jailbreak-tuning method teaches models to generate detailed, high-quality responses to arbitrary harmful requests. For example, OpenAI, Google, and Anthropic models will fully comply with requests for CBRN assistance, executing cyberattacks, and other criminal activity. We further show that backdoors can increase not only the stealth but also the severity of attacks. Stronger jailbreak prompts become even more effective in fine-tuning attacks, linking attacks and potentially defenses in the input and weight spaces. Not only are current models vulnerable, more recent ones also appear to be becoming even more vulnerable to these attacks, underscoring the urgent need for tamper-resistant safeguards. Until such safeguards are discovered, companies and policymakers should view the release of any fine-tunable model as simultaneously releasing its evil twin: equally capable as the original model, and usable for any malicious purpose within its capabilities.
The Structural Safety Generalization Problem
Julius Broomfield
|
Tom Gibbs
|
George Ingebretsen
|
Ethan Kosak-Hine
|
Tia Nasir
|
Jason Zhang
|
Reihaneh Iranmanesh
|
Sara Pieri
|
Reihaneh Rabbany
|
Kellin Pelrine
Findings of the Association for Computational Linguistics: ACL 2025
LLM jailbreaks are a widespread safety challenge. Given this problem has not yet been tractable, we suggest targeting a key failure mechanism: the failure of safety to generalize across semantically equivalent inputs. We further focus the target by requiring desirable tractability properties of attacks to study: explainability, transferability between models, and transferability between goals. We perform red-teaming within this framework by uncovering new vulnerabilities to multi-turn, multi-image, and translation-based attacks. These attacks are semantically equivalent by our design to their single-turn, single-image, or untranslated counterparts, enabling systematic comparisons; we show that the different structures yield different safety outcomes. We then demonstrate the potential for this framework to enable new defenses by proposing a Structure Rewriting Guardrail, which converts an input to a structure more conducive to safety assessment. This guardrail significantly improves refusal of harmful inputs, without over-refusing benign ones. Thus, by framing this intermediate challenge—more tractable than universal defenses but essential for long-term safety—we highlight a critical milestone for AI safety research.
Search
Fix author
Co-authors
- Kellin Pelrine 2
- Dillon Bowen 1
- Tom Gibbs 1
- Adam Gleave 1
- George Ingebretsen 1
- show all...