Performance and Risk Trade-offs for Multi-word Text Prediction at Scale
Aniket Vashishtha | S Sai Prasad | Payal Bajaj | Vishrav Chaudhary | Kate Cook | Sandipan Dandapat | Sunayana Sitaram | Monojit Choudhury
Findings of the Association for Computational Linguistics: EACL 2023
Large Language Models such as GPT-3 are well-suited for text prediction tasks, which can help and delight users during text composition. However, LLMs are known to generate ethically inappropriate predictions even for seemingly innocuous contexts. Toxicity detection followed by filtering is a common strategy for mitigating the harm from such predictions. As we argue in this paper, however, in the context of text prediction it is not sufficient to detect and filter toxic content. One also needs to ensure the factual correctness and group-level fairness of the predictions; failing to do so can make the system ineffective and nonsensical at best, and unfair and detrimental to its users at worst. We discuss the gaps and challenges of toxicity detection approaches, from blocklist-based methods to sophisticated state-of-the-art neural classifiers, by evaluating them on the text prediction task for English against a manually crafted CheckList of harms targeted at different groups and different levels of severity.