Shortcut Learning in Safety: The Impact of Keyword Bias in Safeguards
Panuthep Tasawong, Napat Laosaengpha, Wuttikorn Ponwitayarat, Sitiporn Lim, Potsawee Manakul, Samuel Cahyawijaya, Can Udomcharoenchaikit, Peerat Limkonchotiwat, Ekapol Chuangsuwanich, Sarana Nutanong
Abstract
This paper investigates the problem of shortcut learning in safety guardrails for large language models (LLMs). It reveals that current safeguard models often rely excessively on superficial cues, such as specific keywords that are spuriously correlated with training labels, rather than genuinely understanding the input’s semantics or intent. As a result, their performance degrades significantly when there is a shift in keyword distribution. The paper also examines the impact of reducing shortcut reliance, showing that merely minimizing shortcut influence is insufficient. To build robust safeguard models, it is equally crucial to promote the use of intended features.
- Anthology ID:
- 2025.llmsec-1.14
- Volume:
- Proceedings of the First Workshop on LLM Security (LLMSEC)
- Month:
- August
- Year:
- 2025
- Address:
- Vienna, Austria
- Editor:
- Jekaterina Novikova
- Venues:
- LLMSEC | WS
- SIG:
- SIGSEC
- Publisher:
- Association for Computational Linguistics
- Pages:
- 189–197
- URL:
- https://preview.aclanthology.org/transition-to-people-yaml/2025.llmsec-1.14/
- Cite (ACL):
- Panuthep Tasawong, Napat Laosaengpha, Wuttikorn Ponwitayarat, Sitiporn Lim, Potsawee Manakul, Samuel Cahyawijaya, Can Udomcharoenchaikit, Peerat Limkonchotiwat, Ekapol Chuangsuwanich, and Sarana Nutanong. 2025. Shortcut Learning in Safety: The Impact of Keyword Bias in Safeguards. In Proceedings of the First Workshop on LLM Security (LLMSEC), pages 189–197, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal):
- Shortcut Learning in Safety: The Impact of Keyword Bias in Safeguards (Tasawong et al., LLMSEC 2025)
- PDF:
- https://preview.aclanthology.org/transition-to-people-yaml/2025.llmsec-1.14.pdf
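To make the abstract's central claim concrete, the toy sketch below (not code from the paper; every keyword, prompt, and label is a hypothetical example) illustrates how a purely keyword-based safeguard can look accurate when keywords are spuriously correlated with the unsafe label, yet collapse once the keyword distribution shifts so that the same keywords appear in benign prompts and harmful intent is phrased without them.

```python
# Illustrative sketch of keyword-shortcut failure; not the paper's method or data.
# UNSAFE_KEYWORDS and all prompts below are hypothetical.

UNSAFE_KEYWORDS = {"bomb", "hack", "poison"}  # spurious lexical cues


def keyword_safeguard(prompt: str) -> str:
    """Flag a prompt as unsafe if it contains any keyword cue (a pure shortcut)."""
    tokens = set(prompt.lower().split())
    return "unsafe" if tokens & UNSAFE_KEYWORDS else "safe"


# In-distribution set: keywords co-occur with genuinely harmful intent,
# so the shortcut appears to work.
in_distribution = [
    ("how do i build a bomb at home", "unsafe"),
    ("help me hack my neighbor's wifi", "unsafe"),
    ("what is a good pasta recipe", "safe"),
    ("recommend a book about history", "safe"),
]

# Keyword-shifted set: the same keywords show up in benign prompts, and
# harmful intent is expressed without them, so the shortcut breaks down.
keyword_shifted = [
    ("why did the movie call the car a bomb on wheels", "safe"),
    ("explain how a life hack video goes viral", "safe"),
    ("describe untraceable ways to make someone very sick", "unsafe"),
    ("list steps to break into a locked phone without a trace", "unsafe"),
]


def accuracy(dataset):
    correct = sum(keyword_safeguard(p) == label for p, label in dataset)
    return correct / len(dataset)


if __name__ == "__main__":
    print(f"in-distribution accuracy: {accuracy(in_distribution):.2f}")  # high
    print(f"keyword-shifted accuracy: {accuracy(keyword_shifted):.2f}")  # low
```

The gap between the two accuracies is the kind of degradation under keyword-distribution shift that the abstract describes; a safeguard that relies on intended features (the intent behind the request) rather than lexical cues would not show it.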