Parameter-Efficient Detoxification with Contrastive Decoding

Tong Niu, Caiming Xiong, Yingbo Zhou, Semih Yavuz


Abstract
The field of natural language generation has witnessed significant advances in recent years, including the development of controllable text generation techniques. However, controlling the attributes of generated text remains a challenge, especially when aiming to avoid undesirable behavior such as toxicity. In this work, we introduce Detoxification Generator (DETOXIGEN), an inference-time algorithm that steers generation away from unwanted styles. DETOXIGEN is an ensemble of a pre-trained language model (the generator) and a detoxifier. The detoxifier is intentionally trained on toxic data representative of the undesirable attribute, encouraging it to generate text exclusively in that style. During generation, the trained detoxifier produces the undesirable tokens for the generator to contrast against at each decoding step, directly steering the generator away from tokens that the detoxifier considers highly likely. We evaluate DETOXIGEN on the commonly used REALTOXICITYPROMPTS benchmark (Gehman et al., 2020) with various language models as generators and find that it significantly outperforms previous approaches on detoxification metrics without compromising generation quality. Moreover, because the detoxifier is obtained by soft prompt-tuning the same backbone language model as the generator, DETOXIGEN requires only a tiny amount of extra weights (the detoxifier's virtual tokens) to be loaded into GPU memory during decoding, making it a lightweight, practical, and parameter-efficient detoxification strategy.
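The contrastive ensemble described in the abstract can be sketched in a few lines of code. The snippet below is a minimal illustration under stated assumptions, not the paper's exact formulation: it assumes the generator and detoxifier share a vocabulary, that the combination rule is a simple subtraction of the detoxifier's log-probabilities scaled by a weight alpha, and that decoding is greedy. The function names, the alpha hyperparameter, and the callable interfaces are illustrative choices, not taken from the paper.

```python
# Minimal sketch of contrastive detoxification decoding (illustrative only).
# Assumption: the ensemble down-weights tokens the detoxifier rates as likely
# by subtracting alpha * detoxifier log-probs from the generator log-probs.
import torch
import torch.nn.functional as F


def contrastive_detox_step(generator_logits, detoxifier_logits, alpha=0.5):
    """Combine one decoding step of generator and detoxifier logits.

    Tokens the detoxifier considers highly likely (toxic continuations)
    are pushed down in the generator's distribution.
    """
    gen_logprobs = F.log_softmax(generator_logits, dim=-1)
    detox_logprobs = F.log_softmax(detoxifier_logits, dim=-1)
    combined = gen_logprobs - alpha * detox_logprobs
    return F.log_softmax(combined, dim=-1)  # renormalize


def generate(generator, detoxifier, input_ids, max_new_tokens=20, alpha=0.5):
    """Greedy decoding with the contrastive ensemble.

    `generator` and `detoxifier` are callables mapping a (1, seq_len) tensor
    of token ids to (1, seq_len, vocab) logits, e.g. two causal LMs sharing
    one tokenizer, the detoxifier being the prompt-tuned copy.
    """
    for _ in range(max_new_tokens):
        gen_logits = generator(input_ids)[:, -1, :]
        detox_logits = detoxifier(input_ids)[:, -1, :]
        scores = contrastive_detox_step(gen_logits, detox_logits, alpha)
        next_token = scores.argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
    return input_ids
```

In practice, the two callables would wrap the same backbone language model, with the detoxifier prepending its learned soft-prompt (virtual token) embeddings, so only those extra embeddings need to be kept in GPU memory alongside the generator.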
Anthology ID: 2024.hucllm-1.3
Volume: Proceedings of the 1st Human-Centered Large Language Modeling Workshop
Month: August
Year: 2024
Address: TBD
Editors: Nikita Soni, Lucie Flek, Ashish Sharma, Diyi Yang, Sara Hooker, H. Andrew Schwartz
Venues: HuCLLM | WS
Publisher: ACL
Pages: 30–40
URL: https://aclanthology.org/2024.hucllm-1.3
Cite (ACL): Tong Niu, Caiming Xiong, Yingbo Zhou, and Semih Yavuz. 2024. Parameter-Efficient Detoxification with Contrastive Decoding. In Proceedings of the 1st Human-Centered Large Language Modeling Workshop, pages 30–40, TBD. ACL.
Cite (Informal): Parameter-Efficient Detoxification with Contrastive Decoding (Niu et al., HuCLLM-WS 2024)
PDF: https://preview.aclanthology.org/nschneid-patch-4/2024.hucllm-1.3.pdf