Sticking to the Mean: Detecting Sticky Tokens in Text Embedding Models

Kexin Chen, Dongxia Wang, Yi Liu, Haonan Zhang, Wenhai Wang


Abstract
Despite the widespread use of Transformer-based text embedding models in NLP tasks, surprising “sticky tokens” can undermine the reliability of embeddings. These tokens, when repeatedly inserted into sentences, pull sentence similarity toward a certain fixed value, disrupting the normal distribution of embedding distances and degrading downstream performance. In this paper, we systematically investigate such anomalous tokens, formally defining them and introducing an efficient detection method, Sticky Token Detector (STD), based on sentence and token filtering. Applying STD to 40 checkpoints across 14 model families, we discover a total of 868 sticky tokens. Our analysis reveals that these tokens often originate from special or unused entries in the vocabulary, as well as fragmented subwords from multilingual corpora. Notably, their presence does not strictly correlate with model size or vocabulary size. We further evaluate how sticky tokens affect downstream tasks such as clustering and retrieval, observing significant performance drops of up to 50%. Through attention-layer analysis, we show that sticky tokens disproportionately dominate the model’s internal representations, raising concerns about tokenization robustness. Our findings underscore the need for better tokenization strategies and model design to mitigate the impact of sticky tokens in future text embedding applications.
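The effect the abstract describes — repeatedly inserting one token into a sentence pair and watching their similarity get pulled toward a fixed value — can be sketched with a toy model. The `embed` function below is a hypothetical stand-in (mean of per-token pseudo-random vectors), not the paper's STD method; it only illustrates why repeated insertion drags any sentence pair's cosine similarity toward the inserted token's self-similarity:

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy bag-of-words embedding: each token maps to a fixed
    pseudo-random vector (seeded by its hash); the sentence vector
    is their mean. A stand-in for a real embedding model."""
    vecs = []
    for tok in text.split():
        seed = int(hashlib.md5(tok.encode()).hexdigest(), 16) % (2**32)
        vecs.append(np.random.default_rng(seed).standard_normal(dim))
    return np.mean(vecs, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_shift(s1: str, s2: str, token: str, repeats: int = 8):
    """Cosine similarity of (s1, s2) before and after prepending
    `token` `repeats` times to both sentences."""
    base = cosine(embed(s1), embed(s2))
    prefix = " ".join([token] * repeats)
    stuck = cosine(embed(f"{prefix} {s1}"), embed(f"{prefix} {s2}"))
    return base, stuck
```

In this toy setting, as `repeats` grows both sentence vectors are dominated by the inserted token's vector, so the pair's similarity converges toward 1 regardless of the sentences' original relatedness — the mean-pooled analogue of the "sticking" behavior the paper detects in real models.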
Anthology ID:
2025.acl-long.1391
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
28660–28681
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1391/
Cite (ACL):
Kexin Chen, Dongxia Wang, Yi Liu, Haonan Zhang, and Wenhai Wang. 2025. Sticking to the Mean: Detecting Sticky Tokens in Text Embedding Models. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 28660–28681, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Sticking to the Mean: Detecting Sticky Tokens in Text Embedding Models (Chen et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1391.pdf