Shenghong Dai
2025
AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety Detection
Weidi Luo | Shenghong Dai | Xiaogeng Liu | Suman Banerjee | Huan Sun | Muhao Chen | Chaowei Xiao
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
The rapid advancements in Large Language Models (LLMs) have enabled their deployment as autonomous agents for handling complex tasks in dynamic environments. These LLMs demonstrate strong problem-solving capabilities and adaptability to multifaceted scenarios. However, their use as agents also introduces significant risks, including task-specific risks, which are identified by the agent administrator based on the specific task requirements and constraints, and systemic risks, which stem from vulnerabilities in their design or interactions, potentially compromising the confidentiality, integrity, or availability (CIA) of information and triggering security risks. Existing defenses fail to adaptively and effectively mitigate these risks. In this paper, we propose AGrail, a lifelong agent guardrail to enhance LLM agent safety, which features adaptive safety check generation, effective safety check optimization, and tool compatibility and flexibility. Extensive experiments demonstrate that AGrail not only achieves strong performance against task-specific and systemic risks but also exhibits transferability across different LLM agents' tasks.
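To make the guardrail idea concrete, below is a minimal sketch of an agent guardrail loop that intercepts a proposed agent action and evaluates it against a set of registered safety checks. This is an illustrative assumption, not AGrail's actual implementation; the class names, check functions, and tool names are hypothetical.

```python
# Hypothetical sketch of an agent guardrail loop (not AGrail's actual code).
# A guardrail intercepts each proposed agent action, runs it through a set of
# safety checks, and reports any violations before the action is executed.

from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class ProposedAction:
    tool: str        # e.g. "shell", "browser" (hypothetical tool names)
    arguments: dict  # tool-specific parameters


# A safety check inspects an action and returns a violation message, or None if safe.
SafetyCheck = Callable[[ProposedAction], Optional[str]]


@dataclass
class Guardrail:
    checks: list = field(default_factory=list)

    def add_check(self, check: SafetyCheck) -> None:
        """Register a safety check (in AGrail, checks are generated adaptively)."""
        self.checks.append(check)

    def review(self, action: ProposedAction) -> list:
        """Return all violation messages triggered by the proposed action."""
        return [msg for check in self.checks if (msg := check(action)) is not None]


# Example task-specific check: block destructive shell commands.
def no_destructive_shell(action: ProposedAction) -> Optional[str]:
    if action.tool == "shell" and "rm -rf" in action.arguments.get("command", ""):
        return "Destructive shell command blocked."
    return None


guardrail = Guardrail()
guardrail.add_check(no_destructive_shell)
violations = guardrail.review(ProposedAction("shell", {"command": "rm -rf /tmp/data"}))
print(violations)  # ['Destructive shell command blocked.']
```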