Long-Tail Crisis in Nearest Neighbor Language Models
Yuto Nishida, Makoto Morishita, Hiroyuki Deguchi, Hidetaka Kamigaito, Taro Watanabe
Abstract
The k-nearest-neighbor language model (kNN-LM), a retrieval-augmented language model, improves perplexity on given text by directly accessing a large datastore built from any text data during inference. A widely held hypothesis for the success of kNN-LM is that its explicit memory, i.e., the datastore, enhances predictions for long-tail phenomena. However, prior work has primarily shown its ability to retrieve long-tail contexts, while its performance in estimating the probabilities of long-tail target tokens during inference remains underexplored. In this paper, we investigate the behavior of kNN-LM on low-frequency tokens, examining prediction probability, retrieval accuracy, and token distribution in the datastore. Our experimental results reveal that kNN-LM does not improve prediction performance for low-frequency tokens but mainly benefits high-frequency tokens, regardless of the presence of long-tail contexts in the datastore.
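For context, below is a minimal sketch of the kNN-LM interpolation the abstract refers to (in the style of Khandelwal et al., 2020): the datastore maps context representations (keys) to next tokens (values), the retrieved neighbors induce a distribution p_kNN, and the final prediction interpolates p_kNN with the base LM distribution. All names, the toy data, and the hyperparameters (k, temperature, lambda_) are illustrative assumptions, not this paper's implementation.

```python
# Minimal kNN-LM interpolation sketch, not the authors' code.
import numpy as np

def knn_lm_prob(query, keys, values, p_lm, k=4, temperature=1.0, lambda_=0.25):
    """Interpolate a base LM distribution with a k-NN distribution.

    query:  (d,) hidden state for the current context
    keys:   (n, d) datastore context representations
    values: (n,) next-token ids stored alongside each key
    p_lm:   (V,) base LM probability distribution over the vocabulary
    """
    # Squared L2 distances from the query to every datastore key.
    dists = np.sum((keys - query) ** 2, axis=1)
    nn = np.argsort(dists)[:k]  # indices of the k nearest keys

    # Softmax over negative distances weights each retrieved neighbor.
    logits = -dists[nn] / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Aggregate neighbor weights by their stored target token to get p_kNN.
    p_knn = np.zeros_like(p_lm)
    np.add.at(p_knn, values[nn], weights)

    # Final distribution: lambda * p_kNN + (1 - lambda) * p_LM.
    return lambda_ * p_knn + (1.0 - lambda_) * p_lm

# Toy usage: 100 datastore entries, 8-dim keys, vocabulary of 50 tokens.
rng = np.random.default_rng(0)
keys = rng.normal(size=(100, 8))
values = rng.integers(0, 50, size=100)
p_lm = rng.dirichlet(np.ones(50))
p = knn_lm_prob(rng.normal(size=8), keys, values, p_lm)
assert np.isclose(p.sum(), 1.0)
```

The paper's question concerns the values side of this lookup: whether the probability mass that p_kNN assigns to low-frequency target tokens actually improves their prediction.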
- Anthology ID:
- 2025.findings-naacl.331
- Volume:
- Findings of the Association for Computational Linguistics: NAACL 2025
- Month:
- April
- Year:
- 2025
- Address:
- Albuquerque, New Mexico
- Editors:
- Luis Chiruzzo, Alan Ritter, Lu Wang
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 5965–5978
- URL:
- https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.331/
- Cite (ACL):
- Yuto Nishida, Makoto Morishita, Hiroyuki Deguchi, Hidetaka Kamigaito, and Taro Watanabe. 2025. Long-Tail Crisis in Nearest Neighbor Language Models. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 5965–5978, Albuquerque, New Mexico. Association for Computational Linguistics.
- Cite (Informal):
- Long-Tail Crisis in Nearest Neighbor Language Models (Nishida et al., Findings 2025)
- PDF:
- https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.331.pdf