Bypassing LLM Guardrails: An Empirical Analysis of Evasion Attacks against Prompt Injection and Jailbreak Detection Systems

William Hackett, Lewis Birch, Stefan Trawicki, Neeraj Suri, Peter Garraghan


Abstract
Large Language Models (LLMs) guardrail systems are designed to protect against prompt injection and jailbreak attacks. However, they remain vulnerable to evasion techniques. We demonstrate two approaches for bypassing LLM prompt injection and jailbreak detection systems via traditional character injection methods and algorithmic Adversarial Machine Learning (AML) evasion techniques. Through testing against six prominent protection systems, including Microsoft’s Azure Prompt Shield and Meta’s Prompt Guard, we show that both methods can be used to evade detection while maintaining adversarial utility achieving in some instances up to 100% evasion success. Furthermore, we demonstrate that adversaries can enhance Attack Success Rates (ASR) against black-box targets by leveraging word importance ranking computed by offline white-box models. Our findings reveal vulnerabilities within current LLM protection mechanisms and highlight the need for more robust guardrail systems.
Anthology ID:
2025.llmsec-1.8
Volume:
Proceedings of the The First Workshop on LLM Security (LLMSEC)
Month:
August
Year:
2025
Address:
Vienna, Austria
Editor:
Jekaterina Novikova
Venues:
LLMSEC | WS
SIG:
SIGSEC
Publisher:
Association for Computational Linguistics
Note:
Pages:
101–114
Language:
URL:
https://preview.aclanthology.org/transition-to-people-yaml/2025.llmsec-1.8/
DOI:
Bibkey:
Cite (ACL):
William Hackett, Lewis Birch, Stefan Trawicki, Neeraj Suri, and Peter Garraghan. 2025. Bypassing LLM Guardrails: An Empirical Analysis of Evasion Attacks against Prompt Injection and Jailbreak Detection Systems. In Proceedings of the The First Workshop on LLM Security (LLMSEC), pages 101–114, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Bypassing LLM Guardrails: An Empirical Analysis of Evasion Attacks against Prompt Injection and Jailbreak Detection Systems (Hackett et al., LLMSEC 2025)
Copy Citation:
PDF:
https://preview.aclanthology.org/transition-to-people-yaml/2025.llmsec-1.8.pdf
Supplementarymaterial:
 2025.llmsec-1.8.SupplementaryMaterial.zip
Supplementarymaterial:
 2025.llmsec-1.8.SupplementaryMaterial.txt
Supplementarymaterial:
 2025.llmsec-1.8.SupplementaryMaterial.zip