Sitiporn Lim


2025

pdf bib
Shortcut Learning in Safety: The Impact of Keyword Bias in Safeguards
Panuthep Tasawong | Napat Laosaengpha | Wuttikorn Ponwitayarat | Sitiporn Lim | Potsawee Manakul | Samuel Cahyawijaya | Can Udomcharoenchaikit | Peerat Limkonchotiwat | Ekapol Chuangsuwanich | Sarana Nutanong
Proceedings of the The First Workshop on LLM Security (LLMSEC)

This paper investigates the problem of shortcut learning in safety guardrails for large language models (LLMs). It reveals that current safeguard models often rely excessively on superficial cues, such as specific keywords that are spuriously correlated with training labels, rather than genuinely understanding the input’s semantics or intent. As a result, their performance degrades significantly when there is a shift in keyword distribution. The paper also examines the impact of reducing shortcut reliance, showing that merely minimizing shortcut influence is insufficient. To build robust safeguard models, it is equally crucial to promote the use of intended features.