Gradient-Based Language Model Red Teaming

Nevan Wichers, Carson Denison, Ahmad Beirami


Abstract
Red teaming is a common strategy for identifying weaknesses in generative language models (LMs) by producing adversarial prompts that trigger models to generate unsafe responses. Red teaming is instrumental for both model alignment and evaluation, but is labor-intensive and difficult to scale when done by humans. In this paper, we present Gradient-Based Red Teaming (GBRT), a novel red teaming method for automatically generating diverse prompts that are likely to cause an LM to output unsafe responses. GBRT is a form of prompt learning, trained by scoring an LM response with a safety classifier and then backpropagating through the frozen safety classifier and LM to update the prompt. To improve the coherence of input prompts, we introduce two variants that add a realism loss and fine-tune a pretrained model to generate the prompts instead of learning the prompts directly. Our experiments show that GBRT is more effective at finding prompts that trigger an LM to generate unsafe responses than a strong reinforcement learning-based red teaming approach and works even when the LM has been fine-tuned to produce safer outputs.
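The core loop described in the abstract — scoring a frozen LM's response with a frozen safety classifier and backpropagating through both to update only the prompt — can be sketched as follows. This is a minimal, self-contained illustration under assumed toy models and sizes, not the authors' actual architecture or code; the differentiable "soft" decoding and the relaxation of prompt tokens to distributions are simplifying assumptions made here so gradients can flow end to end.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, PROMPT_LEN, RESP_LEN = 100, 32, 8, 12  # toy sizes (assumptions)

class ToyLM(nn.Module):
    """Stand-in for the frozen language model."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, EMB, batch_first=True)
        self.head = nn.Linear(EMB, VOCAB)

    def decode_soft(self, prompt_embs, steps):
        # Differentiable decoding: each step emits a distribution over the
        # vocabulary, and the next input is the expected embedding under it.
        _, h = self.rnn(prompt_embs)
        tok_emb = prompt_embs[:, -1:, :]
        dists = []
        for _ in range(steps):
            out, h = self.rnn(tok_emb, h)
            probs = F.softmax(self.head(out), dim=-1)     # (B, 1, VOCAB)
            dists.append(probs)
            tok_emb = probs @ self.embed.weight           # expected embedding
        return torch.cat(dists, dim=1)                    # (B, steps, VOCAB)

class ToySafetyClassifier(nn.Module):
    """Stand-in for the frozen safety classifier; higher output = safer."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.score = nn.Linear(EMB, 1)

    def forward(self, token_dists):
        # Accepts soft token distributions so gradients can flow through.
        embs = token_dists @ self.embed.weight            # (B, T, EMB)
        return torch.sigmoid(self.score(embs.mean(dim=1)))

lm, clf = ToyLM(), ToySafetyClassifier()
for p in list(lm.parameters()) + list(clf.parameters()):
    p.requires_grad_(False)                               # both models stay frozen

# The only trainable parameters: logits over the vocabulary per prompt position.
prompt_logits = nn.Parameter(torch.randn(1, PROMPT_LEN, VOCAB))
opt = torch.optim.Adam([prompt_logits], lr=0.1)

for step in range(200):
    prompt_dist = F.softmax(prompt_logits, dim=-1)        # relaxed prompt tokens
    prompt_embs = prompt_dist @ lm.embed.weight           # soft prompt embeddings
    response = lm.decode_soft(prompt_embs, RESP_LEN)      # soft LM response
    loss = clf(response).mean()                           # minimize "safe" score
    opt.zero_grad(); loss.backward(); opt.step()

print("learned prompt tokens:", prompt_dist.argmax(dim=-1).tolist())
```

In this sketch the gradient of the classifier's safety score flows through the frozen classifier and frozen LM into the prompt logits, which are the only parameters the optimizer updates; the paper's variants (realism loss, fine-tuning a separate prompt-generator model) are not shown.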
Anthology ID:
2024.eacl-long.175
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2862–2881
URL:
https://aclanthology.org/2024.eacl-long.175
Cite (ACL):
Nevan Wichers, Carson Denison, and Ahmad Beirami. 2024. Gradient-Based Language Model Red Teaming. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2862–2881, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Gradient-Based Language Model Red Teaming (Wichers et al., EACL 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2024.eacl-long.175.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-2/2024.eacl-long.175.mp4