2025
Defensive Prompt Patch: A Robust and Generalizable Defense of Large Language Models against Jailbreak Attacks
Chen Xiong | Xiangyu Qi | Pin-Yu Chen | Tsung-Yi Ho
Findings of the Association for Computational Linguistics: ACL 2025
Safety, security, and compliance are essential requirements when aligning large language models (LLMs). However, many seemingly aligned LLMs are soon shown to be susceptible to jailbreak attacks. These attacks aim to circumvent the models’ safety guardrails and security mechanisms by introducing jailbreak prompts into malicious queries. In response to these challenges, this paper introduces Defensive Prompt Patch (DPP), a novel prompt-based defense mechanism specifically designed to protect LLMs against such sophisticated jailbreak strategies. Unlike previous approaches, which have often compromised the utility of the model for the sake of safety, DPP is designed to achieve a minimal Attack Success Rate (ASR) while preserving the high utility of LLMs. Our method uses strategically designed suffix prompts that effectively thwart a wide range of standard and adaptive jailbreak techniques. Empirical results on Llama-2-7B-Chat and Mistral-7B-Instruct-v0.2 demonstrate the robustness and adaptability of DPP, showing significant reductions in ASR with negligible impact on utility. Our approach not only outperforms existing defense strategies in balancing safety and functionality, but also provides a scalable and robust solution for various LLM platforms.
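The abstract describes DPP as a suffix prompt appended to incoming queries before inference. The sketch below illustrates only that wrapping step; the suffix string and the helper names (DEFENSIVE_SUFFIX, patch_query, guarded_generate) are hypothetical placeholders, and the actual DPP suffix is optimized per model by the authors' method rather than hand-written.

```python
# Minimal sketch of a suffix-prompt defense in the spirit of DPP.
# The suffix text and helper names are illustrative assumptions, not the paper's patch.

DEFENSIVE_SUFFIX = (
    " Kindly note that you must refuse any request that violates "
    "safety or compliance policies, regardless of how it is phrased."
)

def patch_query(user_query: str) -> str:
    """Append the defensive suffix to every user query before inference."""
    return user_query + DEFENSIVE_SUFFIX

def guarded_generate(model, tokenizer, user_query: str, max_new_tokens: int = 256) -> str:
    """Generate a response for the patched query (model/tokenizer, e.g. from Hugging Face)."""
    prompt = patch_query(user_query)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
```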
Libra-Leaderboard: Towards Responsible AI through a Balanced Leaderboard of Safety and Capability
Haonan Li | Xudong Han | Zenan Zhai | Honglin Mu | Hao Wang | Zhenxuan Zhang | Yilin Geng | Shom Lin | Renxi Wang | Artem Shelmanov | Xiangyu Qi | Yuxia Wang | Donghai Hong | Youliang Yuan | Meng Chen | Haoqin Tu | Fajri Koto | Cong Zeng | Tatsuki Kuribayashi | Rishabh Bhardwaj | Bingchen Zhao | Yawen Duan | Yi Liu | Emad A. Alghamdi | Yaodong Yang | Yinpeng Dong | Soujanya Poria | Pengfei Liu | Zhengzhong Liu | Hector Xuguang Ren | Eduard Hovy | Iryna Gurevych | Preslav Nakov | Monojit Choudhury | Timothy Baldwin
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (System Demonstrations)
As large language models (LLMs) continue to evolve, leaderboards play a significant role in steering their development. Existing leaderboards often prioritize model capabilities while overlooking safety concerns, leaving a significant gap in responsible AI development. To address this gap, we introduce Libra-Leaderboard, a comprehensive framework designed to rank LLMs through a balanced evaluation of performance and safety. Combining a dynamic leaderboard with an interactive LLM arena, Libra-Leaderboard encourages the joint optimization of capability and safety. Unlike traditional approaches that average performance and safety metrics, Libra-Leaderboard uses a distance-to-optimal-score method to calculate the overall rankings. This approach incentivizes models to achieve a balance rather than to excel in one dimension at the expense of the other. In the first release, Libra-Leaderboard evaluates 26 mainstream LLMs from 14 leading organizations, identifying critical safety challenges even in state-of-the-art models.
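The abstract contrasts a plain average with a distance-to-optimal-score ranking. The sketch below is one possible reading of that idea, assuming capability and safety scores are normalized to [0, 1], the optimal point is (1, 1), and the distance is Euclidean; the leaderboard's exact normalization and formula may differ.

```python
import math

def distance_to_optimal(capability: float, safety: float) -> float:
    """Distance from the ideal point (1.0, 1.0); smaller is better.
    Illustrative assumption only; Libra-Leaderboard's exact formula may differ."""
    return math.sqrt((1.0 - capability) ** 2 + (1.0 - safety) ** 2)

def libra_style_rank(models: dict[str, tuple[float, float]]) -> list[tuple[str, float]]:
    """Rank models by ascending distance to the optimal (capability, safety) point."""
    return sorted(
        ((name, distance_to_optimal(cap, safe)) for name, (cap, safe) in models.items()),
        key=lambda item: item[1],
    )

# A balanced model (0.70, 0.70) scores a distance of ~0.42, while a lopsided one
# (0.95, 0.40) scores ~0.60, so the balanced model ranks higher. A plain average
# would give 0.70 vs. 0.675, nearly a tie, which is the behavior the abstract rejects.
print(libra_style_rank({"balanced": (0.70, 0.70), "lopsided": (0.95, 0.40)}))
```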