FIER: Fine-Grained and Efficient KV Cache Retrieval for Long-context LLM Inference
Dongwei Wang | Zijie Liu | Song Wang | Yuxin Ren | Jianing Deng | Jingtong Hu | Tianlong Chen | Huanrui Yang
Findings of the Association for Computational Linguistics: EMNLP 2025
The Key-Value (KV) cache reading latency increases significantly with context lengths, hindering the efficiency of long-context LLM inference. To address this, previous works propose retaining a small fraction of KV cache based on token importance. For example, KV eviction uses static heuristics to retain tokens, while KV retrieval dynamically selects query-relevant tokens for more adaptive cache management. However, we observe that important tokens are often sparsely distributed across the long context. This sparsity makes existing page-level KV retrieval inaccurate, as each page may include irrelevant tokens and miss critical ones. In this work, we propose Fier, a **Fi**ne-Grained and **E**fficient KV cache **R**etrieval method. Fier uses 1-bit quantized keys to estimate the importance of each token, resulting in efficient and precise retrieval. Experiments show that Fier matches full KV performance using only 11% of the cache budget across various long-context tasks, reducing decoding latency by 1.2× to 1.5×.
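The abstract only outlines the mechanism: keys are quantized to 1 bit and used to cheaply estimate per-token importance before retrieving a small, query-relevant subset of the KV cache. The sketch below illustrates one plausible reading of that idea using sign-based 1-bit quantization with a per-token scale; the function names, the scaling scheme, and the token-level top-k selection are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def quantize_keys_1bit(keys: torch.Tensor):
    """Sign-based 1-bit quantization of cached keys (illustrative assumption).

    keys: (num_tokens, head_dim) cached key vectors for one attention head.
    Returns a sign pattern and a per-token scale so that
    keys is roughly approximated by scale[:, None] * signs.
    """
    signs = torch.sign(keys)
    signs[signs == 0] = 1.0                   # avoid zero entries in the sign code
    scale = keys.abs().mean(dim=-1)           # one scalar per token
    return signs, scale

def estimate_token_importance(query: torch.Tensor, signs: torch.Tensor, scale: torch.Tensor):
    """Approximate per-token attention scores q·k using only the 1-bit keys."""
    return (signs @ query) * scale            # (num_tokens,)

def select_top_tokens(query: torch.Tensor, keys: torch.Tensor, budget: int):
    """Pick the `budget` tokens with the highest approximate relevance to the query."""
    signs, scale = quantize_keys_1bit(keys)
    scores = estimate_token_importance(query, signs, scale)
    return torch.topk(scores, k=budget).indices

# Toy usage: retrieve the 16 most query-relevant tokens out of 128 cached ones.
if __name__ == "__main__":
    torch.manual_seed(0)
    keys = torch.randn(128, 64)
    query = torch.randn(64)
    idx = select_top_tokens(query, keys, budget=16)
    print(idx.shape)  # torch.Size([16])
```

Because the importance estimate only needs sign bits and one scale per token, the candidate scoring pass is far cheaper than reading the full-precision cache, which is what allows token-level (rather than page-level) selection at low overhead.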