2025
No for Some, Yes for Others: Persona Prompts and Other Sources of False Refusal in Language Models
Flor Miriam Plaza-del-Arco | Paul Röttger | Nino Scherrer | Emanuele Borgonovo | Elmar Plischke | Dirk Hovy
Proceedings of the 9th Widening NLP Workshop
Large language models (LLMs) are increasingly integrated into our daily lives and personalized. However, LLM personalization may also have unintended side effects: recent work suggests that persona prompting can lead models to falsely refuse user requests, but no work has fully quantified the extent of this issue. To address this gap, we measure the impact of 15 sociodemographic personas (based on gender, race, religion, and disability) on false refusal. To control for other factors, we also test 16 different models, 3 tasks (Natural Language Inference, politeness classification, and offensiveness classification), and 9 prompt paraphrases. We propose a Monte Carlo-based method to quantify this issue in a sample-efficient manner. Our results show that as models become more capable, personas affect the refusal rate less. However, the choice of model significantly influences false refusals, especially on sensitive content tasks. Certain sociodemographic personas further increase false refusals in some models, which suggests underlying biases in their alignment strategies or safety mechanisms.
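The abstract mentions a Monte Carlo-based, sample-efficient estimation of false refusal rates across the factor space (persona × model × task × paraphrase). The sketch below is a minimal illustration of that general idea, not the authors' actual method: the factor names, the `is_false_refusal` stub, and the sample size are all hypothetical placeholders. It simply samples random factor combinations instead of evaluating the full 15 × 16 × 3 × 9 grid.

```python
import random

# Hypothetical factor levels; the actual personas, models, tasks, and
# paraphrases are those described in the paper, not reproduced here.
PERSONAS = [f"persona_{i}" for i in range(15)]
MODELS = [f"model_{i}" for i in range(16)]
TASKS = ["nli", "politeness", "offensiveness"]
PARAPHRASES = [f"paraphrase_{i}" for i in range(9)]


def is_false_refusal(persona, model, task, paraphrase) -> bool:
    """Placeholder: in practice this would prompt the model with the given
    persona/task/paraphrase and check whether it refuses a benign request.
    Stubbed here with a random outcome for illustration."""
    return random.random() < 0.05


def estimate_false_refusal_rate(n_samples: int = 1000, seed: int = 0) -> float:
    """Monte Carlo estimate of the overall false refusal rate: sample factor
    combinations uniformly at random rather than evaluating the full grid."""
    rng = random.Random(seed)
    refusals = 0
    for _ in range(n_samples):
        combo = (
            rng.choice(PERSONAS),
            rng.choice(MODELS),
            rng.choice(TASKS),
            rng.choice(PARAPHRASES),
        )
        refusals += is_false_refusal(*combo)
    return refusals / n_samples


if __name__ == "__main__":
    print(f"Estimated false refusal rate: {estimate_false_refusal_rate():.3f}")
```

The same sampling loop can be stratified by persona or by model to compare subgroup refusal rates with far fewer queries than an exhaustive evaluation would require.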