Defending LLMs against Jailbreaking Attacks via Backtranslation

Yihan Wang, Zhouxing Shi, Andrew Bai, Cho-Jui Hsieh


Abstract
Although many large language models (LLMs) have been trained to refuse harmful requests, they are still vulnerable to jailbreaking attacks, which rewrite the original prompt to conceal its harmful intent. In this paper, we propose a new method for defending LLMs against jailbreaking attacks by “backtranslation”. Specifically, given an initial response generated by the target LLM from an input prompt, our backtranslation prompts a language model to infer an input prompt that can lead to that response. The inferred prompt, called the backtranslated prompt, tends to reveal the actual intent of the original prompt, since it is generated from the LLM’s response and is not directly manipulated by the attacker. We then run the target LLM again on the backtranslated prompt, and we refuse the original prompt if the model refuses the backtranslated prompt. We explain that the proposed defense offers several benefits in terms of effectiveness and efficiency. We empirically demonstrate that our defense significantly outperforms the baselines, particularly in cases that are hard for them, while having little impact on the generation quality for benign input prompts. Our implementation is based on our library of LLM jailbreaking defense algorithms at https://github.com/YihanWang617/llm-jailbreaking-defense, and the code for reproducing our experiments is available at https://github.com/YihanWang617/LLM-Jailbreaking-Defense-Backtranslation.
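To make the procedure described in the abstract concrete, below is a minimal Python sketch of the backtranslation defense. It assumes the target LLM and the backtranslation model are exposed as simple prompt-to-response callables and uses a crude keyword-based refusal check; these helpers, the prompt wording, and the function names are illustrative assumptions rather than the authors' implementation or the API of their library (see the linked repositories for the actual code).

```python
# Minimal sketch of the backtranslation defense outlined in the abstract.
# Helper names, the backtranslation prompt, and the refusal check are
# illustrative assumptions, not the authors' llm-jailbreaking-defense API.

REFUSAL_MARKERS = ("i'm sorry", "i cannot", "i can't", "i apologize")


def looks_like_refusal(response: str) -> bool:
    """Crude placeholder refusal check; the actual defense may judge refusals differently."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


def defend_with_backtranslation(
    target_llm,            # callable: prompt string -> response string
    backtranslation_llm,   # callable: prompt string -> response string
    prompt: str,
    refusal_message: str = "I'm sorry, but I cannot assist with that request.",
) -> str:
    """Refuse the original prompt if the backtranslated prompt is refused."""
    # Step 1: get the target model's initial response to the (possibly adversarial) prompt.
    initial_response = target_llm(prompt)
    if looks_like_refusal(initial_response):
        return initial_response  # the model already refused; nothing more to do

    # Step 2: backtranslation -- infer a prompt that could have produced this response.
    backtranslated_prompt = backtranslation_llm(
        "Please guess the user's request that the following AI response answers. "
        "Reply with the request only.\n\nResponse:\n" + initial_response
    )

    # Step 3: run the target model again on the backtranslated prompt.
    second_response = target_llm(backtranslated_prompt)

    # Step 4: if the backtranslated prompt is refused, refuse the original prompt too;
    # otherwise return the initial response unchanged.
    if looks_like_refusal(second_response):
        return refusal_message
    return initial_response
```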
Anthology ID: 2024.findings-acl.948
Volume: Findings of the Association for Computational Linguistics: ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 16031–16046
URL: https://aclanthology.org/2024.findings-acl.948
DOI: 10.18653/v1/2024.findings-acl.948
Cite (ACL): Yihan Wang, Zhouxing Shi, Andrew Bai, and Cho-Jui Hsieh. 2024. Defending LLMs against Jailbreaking Attacks via Backtranslation. In Findings of the Association for Computational Linguistics: ACL 2024, pages 16031–16046, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Defending LLMs against Jailbreaking Attacks via Backtranslation (Wang et al., Findings 2024)
PDF: https://preview.aclanthology.org/autopr/2024.findings-acl.948.pdf