The Structural Safety Generalization Problem
Julius Broomfield | Tom Gibbs | George Ingebretsen | Ethan Kosak-Hine | Tia Nasir | Jason Zhang | Reihaneh Iranmanesh | Sara Pieri | Reihaneh Rabbany | Kellin Pelrine
Findings of the Association for Computational Linguistics: ACL 2025
LLM jailbreaks are a widespread safety challenge. Given that this problem has not yet proven tractable, we suggest targeting a key failure mechanism: the failure of safety to generalize across semantically equivalent inputs. We further focus the target by requiring the attacks we study to have desirable tractability properties: explainability, transferability between models, and transferability between goals. We perform red-teaming within this framework by uncovering new vulnerabilities to multi-turn, multi-image, and translation-based attacks. By design, these attacks are semantically equivalent to their single-turn, single-image, or untranslated counterparts, enabling systematic comparisons; we show that the different structures yield different safety outcomes. We then demonstrate the potential for this framework to enable new defenses by proposing a Structure Rewriting Guardrail, which converts an input to a structure more conducive to safety assessment. This guardrail significantly improves refusal of harmful inputs without over-refusing benign ones. Thus, by framing this intermediate challenge, more tractable than universal defenses but essential for long-term safety, we highlight a critical milestone for AI safety research.
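The abstract's Structure Rewriting Guardrail can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes hypothetical `rewrite_model` and `safety_classifier` callables and simply shows the idea of flattening a structurally complex input (here, a multi-turn conversation) into a semantically equivalent single-turn request before running the safety check.

```python
# Hedged sketch of a structure-rewriting guardrail (not the authors' code).
# Idea: convert a multi-turn input into one standalone request, then assess
# safety on that rewrite, where the full intent is easier to recognize.
from typing import Callable, Dict, List

Message = Dict[str, str]  # e.g., {"role": "user", "content": "..."}


def structure_rewriting_guardrail(
    conversation: List[Message],
    rewrite_model: Callable[[str], str],       # hypothetical LLM call
    safety_classifier: Callable[[str], bool],  # hypothetical moderation call
) -> bool:
    """Return True if the (rewritten) request is judged unsafe."""
    # Flatten the multi-turn structure so intent is not spread across turns.
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in conversation)
    single_turn = rewrite_model(
        "Rewrite the following conversation as one standalone user request, "
        "preserving its meaning:\n" + transcript
    )
    # Safety is assessed on the structure more conducive to assessment.
    return safety_classifier(single_turn)
```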