Fan Mo
2025
ToolSafety: A Comprehensive Dataset for Enhancing Safety in LLM-Based Agent Tool Invocations
Yuejin Xie | Youliang Yuan | Wenxuan Wang | Fan Mo | Jianmin Guo | Pinjia He
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
LLMs are evolving into assistants that leverage tools, significantly expanding their capabilities but also introducing critical safety risks. Current models exhibit notable vulnerabilities, particularly in maintaining safety during multi-step tool interactions and in scenarios involving indirect harm. This paper introduces ToolSafety, a safety fine-tuning dataset designed to address these limitations. ToolSafety comprises 5,668 direct harm samples, 4,311 indirect harm samples, and 4,311 multi-step samples. Key features include support for multi-step safety through synthesized trajectories and realistic, context-aware sample generation. We fine-tuned LLaMA3.1-8B-Instruct and Qwen2.5-7B-Instruct using ToolSafety. Experimental results demonstrate that these models effectively maintain safety in multi-step and indirect harm scenarios. Further analysis of superficial alignment across different decoding strategies, languages, and jailbreak prompts indicates that while some risks persist, the issue is less severe than in multi-step settings. Overall, our approach significantly improves safety across various scenarios with minimal impact on helpfulness, positioning ToolSafety as a valuable resource for building safer tool-using AI systems.
Governance in Motion: Co-evolution of Constitutions and AI models for Scalable Safety
Chenhao Huang | Ziyu Shen | Yicong Ren | Huiyuan Zheng | Jiazheng Zhang | Mingxu Chai | Ming Zhang | Shihan Dou | Fan Mo | Jie Shi | Tao Gui | Qi Zhang | Xuanjing Huang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Aligning large language models (LLMs) with human preferences is a central challenge for building reliable AI systems. Most existing alignment approaches rely on static signals, such as predefined principles or offline human annotations, to guide model behavior toward a fixed approximation of human preferences. However, LLMs can exhibit distributional drift during training, and static alignment mechanisms lack the capacity to adaptively correct misaligned behaviors as they emerge. To address this limitation, we develop a two-stage framework that enables dynamic and continuous alignment. In the first stage, a constitution is continually revised based on observed model behaviors, and models are trained to comply with these evolving principles. In the second stage, this learned constitution is used to guide reinforcement learning, encouraging the model to align with the updated normative signals. We refer to this framework as COCOA: Co-evolution of Constitutions and AI Models. We show that COCOA enables a 7B model to greatly improve safety, raising the StrongReject score from 0.741 to 0.935 and Safe-RLHF accuracy from 77.76% to 90.64% without human annotations, reaching performance close to much larger state-of-the-art models.