Great Memory, Shallow Reasoning: Limits of kNN-LMs

Shangyi Geng, Wenting Zhao, Alexander M Rush


Abstract
K-nearest neighbor language models (kNN-LMs), which integrate retrieval with next-word prediction, have demonstrated strong performance in language modeling as well as on some downstream NLP benchmarks. These results have led researchers to argue that models trained on poor-quality or outdated data could perform well by employing a kNN extension that has access to a higher-quality datastore. In this work, we ask whether this improved ability to recall information really translates into downstream abilities. We extensively evaluate kNN-LMs on a diverse set of tasks, ranging from sentiment classification and commonsense reasoning to multi-hop reasoning. Results show that kNN-LMs excel at memory-intensive tasks, where recognizing patterns in the input is sufficient to determine the output, but struggle with reasoning tasks that require integrating multiple pieces of information to derive new knowledge. We further demonstrate through oracle experiments and qualitative analysis that even with perfect retrieval, kNN-LMs still fail to determine the correct answers, placing an upper bound on their reasoning performance.
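For context, a kNN-LM (Khandelwal et al., 2020) interpolates the base model's next-token distribution with a distribution induced by the k nearest neighbors of the current context in a datastore of (context embedding, next token) pairs. Below is a minimal NumPy sketch of that interpolation; the variable names (datastore_keys, datastore_values, lmbda) are illustrative assumptions of ours, not taken from the paper, and real systems use an approximate-nearest-neighbor index rather than a brute-force scan.

```python
import numpy as np

def knn_lm_next_token_probs(context_emb, p_lm, datastore_keys,
                            datastore_values, vocab_size, k=8, lmbda=0.25):
    """Blend a base LM distribution with a kNN retrieval distribution.

    context_emb:      (d,) embedding of the current context
    p_lm:             (vocab_size,) base LM next-token probabilities
    datastore_keys:   (n, d) stored context embeddings
    datastore_values: (n,) next-token ids paired with each stored context
    """
    # Squared L2 distance from the query context to every stored context.
    dists = np.sum((datastore_keys - context_emb) ** 2, axis=1)

    # Indices of the k nearest stored contexts.
    nn = np.argsort(dists)[:k]

    # Softmax over negative distances weights closer neighbors more heavily.
    w = np.exp(-dists[nn])
    w /= w.sum()

    # Aggregate neighbor weights onto the tokens they stored.
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, datastore_values[nn], w)

    # Final distribution: interpolate the kNN and base LM probabilities.
    return lmbda * p_knn + (1.0 - lmbda) * p_lm
```

The paper's memory-vs-reasoning distinction falls out of this formula: retrieval can only redistribute probability toward tokens that literally follow similar contexts in the datastore, so it helps when the answer is a recallable pattern but cannot compose multiple retrieved facts into new knowledge.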
Anthology ID:
2025.naacl-short.40
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
471–482
URL:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.naacl-short.40/
Cite (ACL):
Shangyi Geng, Wenting Zhao, and Alexander M Rush. 2025. Great Memory, Shallow Reasoning: Limits of kNN-LMs. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 471–482, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Great Memory, Shallow Reasoning: Limits of kNN-LMs (Geng et al., NAACL 2025)
PDF:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.naacl-short.40.pdf