Self-Detoxifying Language Models via Toxification Reversal
Chak Tou Leong, Yi Cheng, Jiashuo Wang, Jian Wang, Wenjie Li
Abstract
Language model detoxification aims to minimize the risk of generating offensive or harmful content in pretrained language models (PLMs) for safer deployment. Existing methods can be roughly categorized as finetuning-based and decoding-based. However, the former is often resource-intensive, while the latter relies on additional components and potentially compromises generation fluency. In this paper, we propose a more lightweight approach that enables the PLM itself to achieve “self-detoxification”. Our method is built upon the observation that prepending a negative steering prompt can effectively induce PLMs to generate toxic content. At the same time, we are inspired by recent research in interpretability, which formulates the evolving contextualized representations within the PLM as an information stream facilitated by the attention layers. Drawing on this idea, we devise a method to identify the toxification direction from the normal generation process to the one prompted with the negative prefix, and then steer the generation in the reversed direction by manipulating the information movement within the attention layers. Experimental results show that our approach, without any fine-tuning or extra components, achieves performance comparable to state-of-the-art methods.
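The mechanism described above can be illustrated with a minimal sketch: contrast a plain forward pass with one prepended by a negative steering prefix, treat the per-layer shift in attention-block outputs as the toxification direction, and subtract a scaled copy of that direction while generating. The prefix string, the hook-based intervention point, and the scale `ALPHA` below are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of toxification reversal on GPT-2, assuming a hypothetical
# negative prefix and a simple per-layer difference as the steering direction.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model.eval()

NEGATIVE_PREFIX = "The following text is rude and disrespectful: "  # illustrative
ALPHA = 1.0  # reversal strength; a tunable assumption

prompt = "I can't believe you would"
plain_ids = tokenizer(prompt, return_tensors="pt").input_ids
toxic_ids = tokenizer(NEGATIVE_PREFIX + prompt, return_tensors="pt").input_ids

def capture_attn_outputs(input_ids):
    """Record each attention block's output at the final context position.
    In current transformers versions, the block's output tuple starts with
    the attention output tensor of shape (batch, seq, hidden)."""
    captured = []
    hooks = [
        block.attn.register_forward_hook(
            lambda _m, _inp, out, store=captured: store.append(out[0][:, -1, :].detach())
        )
        for block in model.transformer.h
    ]
    with torch.no_grad():
        model(input_ids)
    for h in hooks:
        h.remove()
    return captured

plain_out = capture_attn_outputs(plain_ids)
toxic_out = capture_attn_outputs(toxic_ids)

# Per-layer toxification direction: the shift induced by the negative prefix.
directions = [t - p for p, t in zip(plain_out, toxic_out)]

def make_steering_hook(direction):
    """Subtract the scaled toxification direction from the attention output,
    steering the information stream in the reversed direction."""
    def hook(_module, _inputs, output):
        steered = output[0] - ALPHA * direction.to(output[0].dtype)
        return (steered,) + tuple(output[1:])
    return hook

hooks = [
    block.attn.register_forward_hook(make_steering_hook(d))
    for block, d in zip(model.transformer.h, directions)
]
with torch.no_grad():
    generated = model.generate(plain_ids, max_new_tokens=30, do_sample=False)
for h in hooks:
    h.remove()

print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

This sketch applies one fixed direction per layer at every position; the paper derives the direction from the attention-layer information flow and applies it within the generation process, so this should be read as a conceptual approximation rather than a reproduction of the reported results.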
- Anthology ID: 2023.emnlp-main.269
- Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 4433–4449
- URL: https://aclanthology.org/2023.emnlp-main.269
- DOI: 10.18653/v1/2023.emnlp-main.269
- Cite (ACL): Chak Tou Leong, Yi Cheng, Jiashuo Wang, Jian Wang, and Wenjie Li. 2023. Self-Detoxifying Language Models via Toxification Reversal. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4433–4449, Singapore. Association for Computational Linguistics.
- Cite (Informal): Self-Detoxifying Language Models via Toxification Reversal (Leong et al., EMNLP 2023)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2023.emnlp-main.269.pdf