Sumanth Dathathri
2021
Challenges in Detoxifying Language Models
Johannes Welbl | Amelia Glaese | Jonathan Uesato | Sumanth Dathathri | John Mellor | Lisa Anne Hendricks | Kirsty Anderson | Pushmeet Kohli | Ben Coppin | Po-Sen Huang
Findings of the Association for Computational Linguistics: EMNLP 2021
Large language models (LMs) generate remarkably fluent text and can be efficiently adapted across NLP tasks. Measuring and guaranteeing the quality of generated text in terms of safety is imperative for deploying LMs in the real world; to this end, prior work often relies on automatic evaluation of LM toxicity. We critically discuss this approach, evaluate several toxicity mitigation strategies with respect to both automatic and human evaluation, and analyze the consequences of toxicity mitigation in terms of model bias and LM quality. We demonstrate that while basic intervention strategies can effectively optimize previously established automatic metrics on the REALTOXICITYPROMPTS dataset, this comes at the cost of reduced LM coverage for both texts about, and dialects of, marginalized groups. Additionally, we find that human raters often disagree with high automatic toxicity scores after strong toxicity reduction interventions, further highlighting the nuances involved in careful evaluation of LM toxicity.
2020
Plug-and-Play Conversational Models
Andrea Madotto | Etsuko Ishii | Zhaojiang Lin | Sumanth Dathathri | Pascale Fung
Findings of the Association for Computational Linguistics: EMNLP 2020
There has been considerable progress towards conversational models that generate coherent and fluent responses; however, this often involves training large language models on large dialogue datasets, such as Reddit. These large conversational models provide little control over the generated responses, and this control is further limited in the absence of annotated conversational datasets for attribute-specific generation that could be used to fine-tune the model. In this paper, we first propose and evaluate plug-and-play methods for controllable response generation, which do not require dialogue-specific datasets and do not rely on fine-tuning a large model. While effective, the decoding procedure induces considerable computational overhead, rendering the conversational model unsuitable for interactive usage. To overcome this, we introduce an approach that requires neither further computation at decoding time nor any fine-tuning of a large language model. Through extensive automatic and human evaluation, we demonstrate a high degree of control over the generated conversational responses with respect to multiple desired attributes, while remaining fluent.