Short-circuiting Shortcuts: Mechanistic Investigation of Shortcuts in Text Classification

Leon Eshuijs, Shihan Wang, Antske Fokkens

Abstract
Reliance on spurious correlations (shortcuts) has been shown to underlie many of the successes of language models. Previous work focused on identifying the input elements that impact prediction. We investigate how shortcuts are actually processed within the model’s decision-making mechanism. We use actor names in movie reviews as controllable shortcuts with known impact on the outcome. Using mechanistic interpretability methods, we identify specific attention heads that focus on shortcuts. These heads gear the model towards a label before processing the complete input, effectively making premature decisions that bypass contextual analysis. Based on these findings, we introduce Head-based Token Attribution (HTA), which traces intermediate decisions back to input tokens. We show that HTA is effective in detecting shortcuts in LLMs and enables targeted mitigation by selectively deactivating shortcut-related attention heads.
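The mitigation step described in the abstract deactivates individual attention heads. The paper's own implementation is not reproduced here; the following is a minimal, hypothetical PyTorch sketch of head ablation on an off-the-shelf sentiment classifier, where the model name, layer index, and head index are illustrative assumptions, not values from the paper.

    # Hypothetical sketch of attention-head ablation; not the paper's code.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    MODEL = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed stand-in classifier
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

    LAYER, HEAD = 3, 7  # hypothetical "shortcut" head to deactivate
    HEAD_DIM = model.config.dim // model.config.n_heads  # 768 // 12 = 64

    def zero_head(module, inputs):
        # out_lin receives the concatenated per-head contexts; zero one head's slice.
        (hidden,) = inputs
        hidden = hidden.clone()
        hidden[..., HEAD * HEAD_DIM : (HEAD + 1) * HEAD_DIM] = 0.0
        return (hidden,)

    attn = model.distilbert.transformer.layer[LAYER].attention
    handle = attn.out_lin.register_forward_pre_hook(zero_head)

    # A review where an actor name could act as a shortcut feature.
    inputs = tok("A forgettable plot, but DiCaprio is in it.", return_tensors="pt")
    with torch.no_grad():
        ablated_logits = model(**inputs).logits
    handle.remove()

Comparing the logits with and without the hook attached gives a simple causal read-out of how much the ablated head drives the predicted label.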
Anthology ID:
2025.conll-1.8
Volume:
Proceedings of the 29th Conference on Computational Natural Language Learning
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Gemma Boleda, Michael Roth
Venues:
CoNLL | WS
Publisher:
Association for Computational Linguistics
Pages:
105–125
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.conll-1.8/
Cite (ACL):
Leon Eshuijs, Shihan Wang, and Antske Fokkens. 2025. Short-circuiting Shortcuts: Mechanistic Investigation of Shortcuts in Text Classification. In Proceedings of the 29th Conference on Computational Natural Language Learning, pages 105–125, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Short-circuiting Shortcuts: Mechanistic Investigation of Shortcuts in Text Classification (Eshuijs et al., CoNLL 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.conll-1.8.pdf