Not a nuisance but a useful heuristic: Outlier dimensions favor frequent tokens in language models

Iuri Macocco, Nora Graichen, Gemma Boleda, Marco Baroni


Abstract
We study last-layer outlier dimensions, i.e. dimensions that display extreme activations for the majority of inputs. We show that outlier dimensions arise in many different modern language models, and trace their function back to the heuristic of constantly predicting frequent words. We further show how a model can block this heuristic when it is not contextually appropriate, by assigning a counterbalancing weight mass to the remaining dimensions, and we investigate which model parameters boost outlier dimensions and when they arise during training. We conclude that outlier dimensions are a specialized mechanism discovered by many distinct models to implement a useful token prediction heuristic.
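To make the definition above concrete, here is a minimal Python sketch (not the authors' code) of one way to surface candidate last-layer outlier dimensions in a small Hugging Face model. The magnitude rule (|activation| greater than k times the layer-wide RMS) and the 50% "majority of inputs" cutoff are illustrative assumptions, not the paper's exact criterion.

    # Sketch: flag dimensions with extreme activations for most tokens.
    # Threshold k and the 0.5 majority cutoff are assumed, not from the paper.
    import torch
    from transformers import AutoModel, AutoTokenizer

    model_name = "gpt2"  # any causal LM with accessible hidden states
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name).eval()

    text = "The quick brown fox jumps over the lazy dog. " * 20
    inputs = tok(text, return_tensors="pt", truncation=True)

    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)

    # Last-layer hidden states: shape (num_tokens, hidden_size)
    acts = out.hidden_states[-1].squeeze(0)

    k = 6.0                                      # assumed multiplier
    rms = acts.pow(2).mean().sqrt()              # layer-wide RMS activation
    extreme = acts.abs() > k * rms               # per-token, per-dim flags
    frac_extreme = extreme.float().mean(dim=0)   # fraction of tokens per dim

    # "Outlier" here = extreme on a majority of tokens (assumed cutoff)
    outlier_dims = torch.nonzero(frac_extreme > 0.5).flatten()
    print("candidate outlier dimensions:", outlier_dims.tolist())

On real corpora one would aggregate these statistics over many inputs rather than a single repeated sentence; the sketch only illustrates the "extreme for the majority of inputs" definition the abstract uses.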
Anthology ID:
2025.blackboxnlp-1.6
Volume:
Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Yonatan Belinkov, Aaron Mueller, Najoung Kim, Hosein Mohebbi, Hanjie Chen, Dana Arad, Gabriele Sarti
Venues:
BlackboxNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
109–136
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.blackboxnlp-1.6/
Cite (ACL):
Iuri Macocco, Nora Graichen, Gemma Boleda, and Marco Baroni. 2025. Not a nuisance but a useful heuristic: Outlier dimensions favor frequent tokens in language models. In Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 109–136, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Not a nuisance but a useful heuristic: Outlier dimensions favor frequent tokens in language models (Macocco et al., BlackboxNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.blackboxnlp-1.6.pdf