Jiaxun Zhang
2025
SafeScientist: Enhancing AI Scientist Safety for Risk-Aware Scientific Discovery
Kunlun Zhu | Jiaxun Zhang | Ziheng Qi | Nuoxing Shang | Zijia Liu | Peixuan Han | Yue Su | Haofei Yu | Jiaxuan You
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Recent advancements in large language model (LLM) agents have significantly accelerated scientific discovery automation, yet concurrently raised critical ethical and safety concerns. To systematically address these challenges, we introduce **SafeScientist**, an innovative AI scientist framework explicitly designed to enhance safety and ethical responsibility in AI-driven scientific exploration. SafeScientist proactively refuses ethically inappropriate or high-risk tasks and rigorously emphasizes safety throughout the research process. To achieve comprehensive safety oversight, we integrate multiple defensive mechanisms, including prompt monitoring, agent-collaboration monitoring, tool-use monitoring, and an ethical reviewer component. Complementing SafeScientist, we propose **SciSafetyBench**, a novel benchmark specifically designed to evaluate AI safety in scientific contexts, comprising 240 high-risk scientific tasks across 6 domains, alongside 30 specially designed scientific tools and 120 tool-related risk tasks. Extensive experiments demonstrate that SafeScientist significantly improves safety performance by 35% compared to traditional AI scientist frameworks, without compromising scientific output quality. Additionally, we rigorously validate the robustness of our safety pipeline against diverse adversarial attack methods, further confirming the effectiveness of our integrated approach. The code and data will be available at https://github.com/ulab-uiuc/SafeScientist. **Warning**: this paper contains example data that may be offensive or harmful.
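As an illustration of the layered defense the abstract describes, the sketch below composes several independent monitors into a single gate that a request must pass before execution. This is a minimal sketch only: every class, function, and keyword list here is hypothetical and is not taken from the SafeScientist codebase, which couples its monitors to LLM-based checks rather than simple rules.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

# Each monitor inspects one surface of the workflow (user prompt,
# inter-agent messages, or a proposed tool call) and returns a verdict.
Monitor = Callable[[str], Verdict]

def prompt_monitor(text: str) -> Verdict:
    # Placeholder heuristic; a real system would query an LLM classifier.
    banned = ["synthesize the pathogen", "bypass the interlock"]
    if any(phrase in text.lower() for phrase in banned):
        return Verdict(False, "prompt flagged as high-risk")
    return Verdict(True)

def tool_use_monitor(text: str) -> Verdict:
    # Placeholder: only let whitelisted tools be invoked.
    allowed_tools = {"literature_search", "simulation", "plotting"}
    tool = text.split("(")[0].strip()
    if tool not in allowed_tools:
        return Verdict(False, f"tool '{tool}' is not whitelisted")
    return Verdict(True)

def run_with_guards(request: str, monitors: List[Monitor]) -> Verdict:
    """Refuse the request if any monitor objects; otherwise let it proceed."""
    for monitor in monitors:
        verdict = monitor(request)
        if not verdict.allowed:
            return verdict
    return Verdict(True, "all monitors passed")

if __name__ == "__main__":
    print(run_with_guards("literature_search(topic='agent safety')",
                          [prompt_monitor, tool_use_monitor]))
```

The design point the sketch tries to convey is that refusal decisions are made before any tool executes, and each monitoring concern stays a separate, swappable component.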
TinyScientist: An Interactive, Extensible, and Controllable Framework for Building Research Agents
Haofei Yu | Keyang Xuan | Fenghai Li | Kunlun Zhu | Zijie Lei | Jiaxun Zhang | Ziheng Qi | Kyle Richardson | Jiaxuan You
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Automatic research with Large Language Models (LLMs) is rapidly gaining importance, driving the development of increasingly complex workflows involving multi-agent systems, planning, tool usage, code execution, and human-agent interaction to accelerate research processes. However, as more researchers and developers begin to use and build upon these tools and platforms, the complexity and difficulty of extending and maintaining such agentic workflows have become a significant challenge, particularly as algorithms and architectures continue to advance. To address this growing complexity, TinyScientist identifies the essential components of the automatic research workflow and proposes an interactive, extensible, and controllable framework that adapts easily to new tools and supports iterative growth. We provide an open-source codebase, an interactive web demonstration, and a PyPI Python package to make state-of-the-art auto-research pipelines broadly accessible to every researcher and developer.
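To give a concrete feel for what an extensible auto-research pipeline can look like, here is a minimal, hypothetical sketch in which each workflow stage is a pluggable callable. The interface below is illustrative only and does not reflect the actual TinyScientist package API.

```python
from typing import Callable, Dict, List

# A stage takes the shared research state (e.g. topic, idea, draft)
# and returns an updated state. New tools plug in as new stages.
Stage = Callable[[Dict[str, str]], Dict[str, str]]

class ResearchPipeline:
    """Hypothetical extensible pipeline: stages run in registration order."""

    def __init__(self) -> None:
        self._stages: List[Stage] = []

    def register(self, stage: Stage) -> "ResearchPipeline":
        self._stages.append(stage)
        return self  # allow chaining

    def run(self, state: Dict[str, str]) -> Dict[str, str]:
        for stage in self._stages:
            state = stage(state)
        return state

def think(state: Dict[str, str]) -> Dict[str, str]:
    state["idea"] = f"refined idea based on: {state['topic']}"
    return state

def write(state: Dict[str, str]) -> Dict[str, str]:
    state["draft"] = f"paper draft about {state['idea']}"
    return state

if __name__ == "__main__":
    pipeline = ResearchPipeline().register(think).register(write)
    print(pipeline.run({"topic": "LLM agent safety"}))
```

Keeping stages behind a single shared-state interface is what makes such a workflow easy to extend: adding a new tool (say, a code-execution or review step) only requires registering one more callable.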