Outlier Dimensions Encode Task Specific Knowledge

William Rudman, Catherine Chen, Carsten Eickhoff


Abstract
Representations from large language models (LLMs) are known to be dominated by a small subset of dimensions with exceedingly high variance. Previous works have argued that although ablating these outlier dimensions in LLM representations hurts downstream performance, outlier dimensions are detrimental to the representational quality of embeddings. In this study, we investigate how fine-tuning impacts outlier dimensions and show that 1) outlier dimensions that occur in pre-training persist in fine-tuned models and 2) a single outlier dimension can complete downstream tasks with a minimal error rate. Our results suggest that outlier dimensions can encode crucial task-specific knowledge and that the value of a representation in a single outlier dimension drives downstream model decisions.
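The abstract's central technical claim is that a single high-variance ("outlier") dimension can drive downstream decisions. As a rough illustration of that idea only, the sketch below flags outlier dimensions by per-dimension variance and fits a one-dimensional threshold classifier. The 5x-variance cutoff, function names, and synthetic data are assumptions for illustration, not the authors' actual procedure.

```python
import numpy as np

# Hedged sketch, not the paper's exact method: flag "outlier dimensions"
# as embedding dimensions whose variance far exceeds the average, then
# decide a binary task by thresholding one such dimension.

def find_outlier_dimensions(embeddings: np.ndarray, ratio: float = 5.0) -> np.ndarray:
    """Indices of dimensions whose variance exceeds `ratio` times the
    mean per-dimension variance; `embeddings` has shape (n, d)."""
    variances = embeddings.var(axis=0)
    return np.where(variances > ratio * variances.mean())[0]

def one_dimension_classifier(train_emb: np.ndarray, train_labels: np.ndarray, dim: int):
    """Threshold classifier on a single dimension: split at the midpoint
    of the two class means and orient by which class mean is larger."""
    values = train_emb[:, dim]
    mu0 = values[train_labels == 0].mean()
    mu1 = values[train_labels == 1].mean()
    threshold = (mu0 + mu1) / 2.0

    def predict(emb: np.ndarray) -> np.ndarray:
        above = emb[:, dim] > threshold
        return np.where(above, 1, 0) if mu1 > mu0 else np.where(above, 0, 1)

    return predict

# Toy usage with synthetic data standing in for fine-tuned LLM embeddings:
# plant one high-variance, label-correlated dimension and recover it.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
emb = rng.normal(size=(200, 768))
emb[:, 42] += 10.0 * labels
dims = find_outlier_dimensions(emb)          # expected: [42]
clf = one_dimension_classifier(emb, labels, dims[0])
print(dims, (clf(emb) == labels).mean())     # accuracy near 1.0
```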
Anthology ID: 2023.emnlp-main.901
Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 14596–14605
URL: https://aclanthology.org/2023.emnlp-main.901
DOI: 10.18653/v1/2023.emnlp-main.901
Cite (ACL): William Rudman, Catherine Chen, and Carsten Eickhoff. 2023. Outlier Dimensions Encode Task Specific Knowledge. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14596–14605, Singapore. Association for Computational Linguistics.
Cite (Informal): Outlier Dimensions Encode Task Specific Knowledge (Rudman et al., EMNLP 2023)
PDF: https://preview.aclanthology.org/nschneid-patch-5/2023.emnlp-main.901.pdf
Video: https://preview.aclanthology.org/nschneid-patch-5/2023.emnlp-main.901.mp4