Adaptive Attacks Break Defenses Against Indirect Prompt Injection Attacks on LLM Agents

Qiusi Zhan, Richard Fang, Henil Shalin Panchal, Daniel Kang


Abstract
Large Language Model (LLM) agents exhibit remarkable performance across diverse applications by using external tools to interact with environments. However, integrating external tools introduces security risks, such as indirect prompt injection (IPI) attacks. Despite defenses designed for IPI attacks, their robustness remains questionable due to insufficient testing against adaptive attacks. In this paper, we evaluate eight different defenses and bypass all of them using adaptive attacks, consistently achieving an attack success rate of over 50%. This reveals critical vulnerabilities in current defenses. Our research underscores the need for adaptive attack evaluation when designing defenses to ensure robustness and reliability. The code is available at https://github.com/uiuc-kang-lab/AdaptiveAttackAgent.
Anthology ID:
2025.findings-naacl.395
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7101–7117
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.395/
Cite (ACL):
Qiusi Zhan, Richard Fang, Henil Shalin Panchal, and Daniel Kang. 2025. Adaptive Attacks Break Defenses Against Indirect Prompt Injection Attacks on LLM Agents. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 7101–7117, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Adaptive Attacks Break Defenses Against Indirect Prompt Injection Attacks on LLM Agents (Zhan et al., Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.395.pdf