Neurocache: Efficient Vector Retrieval for Long-range Language Modeling

Ali Safaya, Deniz Yuret


Abstract
This paper introduces Neurocache, an approach to extend the effective context size of large language models (LLMs) using an external vector cache to store their past states. Like recent vector retrieval approaches, Neurocache uses an efficient k-nearest-neighbor (kNN) algorithm to retrieve relevant past states and incorporate them into the attention process. Neurocache improves upon previous methods by (1) storing compressed states, which reduces cache size; (2) performing a single retrieval operation per token, which increases inference speed; and (3) extending the retrieval window to neighboring states, which improves both language modeling and downstream task accuracy. Our experiments show the effectiveness of Neurocache both for models trained from scratch and for pre-trained models such as Llama2-7B and Mistral-7B when enhanced with the cache mechanism. We also compare Neurocache with text retrieval methods and show improvements in single-document question-answering and few-shot learning tasks. The source code is available at: https://github.com/alisafaya/neurocache
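To make the mechanism concrete, the sketch below shows a minimal PyTorch rendering of the retrieval step the abstract describes: compress the current hidden states, perform one kNN lookup per token over the cached (compressed) states, widen each hit to its neighboring cache entries, and attend over the retrieved set. All function names, shapes, and the dot-product similarity are illustrative assumptions, not the API of the released Neurocache code (see the repository linked above for that).

```python
# Illustrative sketch only -- names and shapes are assumptions,
# not the released Neurocache implementation.
import torch
import torch.nn.functional as F


def compress(hidden_states, proj):
    # Project hidden states to a smaller dimension before caching,
    # which is what keeps the cache compact.
    return hidden_states @ proj                         # (seq, d_model) -> (seq, d_cache)


def retrieve(queries, cache, k=4, window=1):
    # One kNN lookup per token over all cached states (dot-product similarity),
    # then each hit is widened to include `window` neighboring cache entries.
    scores = queries @ cache.T                          # (seq, cache_len)
    topk = scores.topk(k, dim=-1).indices               # (seq, k)
    offsets = torch.arange(-window, window + 1)
    idx = (topk.unsqueeze(-1) + offsets).clamp(0, cache.size(0) - 1)
    return cache[idx.flatten(start_dim=1)]              # (seq, k*(2*window+1), d_cache)


def cache_attention(queries, retrieved):
    # Attend over the retrieved states only, instead of the full cache.
    scores = torch.einsum("td,tkd->tk", queries, retrieved) / retrieved.size(-1) ** 0.5
    return torch.einsum("tk,tkd->td", F.softmax(scores, dim=-1), retrieved)


# Toy usage with random tensors.
d_model, d_cache, seq_len, cache_len = 64, 16, 8, 128
proj = torch.randn(d_model, d_cache) / d_model ** 0.5
cache = torch.randn(cache_len, d_cache)                 # compressed states of past segments
hidden = torch.randn(seq_len, d_model)                  # current segment's hidden states

q = compress(hidden, proj)
context = cache_attention(q, retrieve(q, cache, k=4, window=1))
print(context.shape)                                    # torch.Size([8, 16])
```

In the full method, the cache is filled with compressed states of previously processed segments and the retrieved vectors are incorporated into the model's attention layers; the sketch above covers only the per-token retrieval path.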
Anthology ID:
2024.naacl-long.50
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
870–883
URL:
https://aclanthology.org/2024.naacl-long.50
Cite (ACL):
Ali Safaya and Deniz Yuret. 2024. Neurocache: Efficient Vector Retrieval for Long-range Language Modeling. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 870–883, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Neurocache: Efficient Vector Retrieval for Long-range Language Modeling (Safaya & Yuret, NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.50.pdf
Copyright:
2024.naacl-long.50.copyright.pdf