Huanrui Yang


2025

FIER: Fine-Grained and Efficient KV Cache Retrieval for Long-context LLM Inference
Dongwei Wang | Zijie Liu | Song Wang | Yuxin Ren | Jianing Deng | Jingtong Hu | Tianlong Chen | Huanrui Yang
Findings of the Association for Computational Linguistics: EMNLP 2025

The Key-Value (KV) cache reading latency increases significantly with context length, hindering the efficiency of long-context LLM inference. To address this, previous works propose retaining a small fraction of the KV cache based on token importance. For example, KV eviction uses static heuristics to retain tokens, while KV retrieval dynamically selects query-relevant tokens for more adaptive cache management. However, we observe that important tokens are often sparsely distributed across the long context. This sparsity makes existing page-level KV retrieval inaccurate, as each page may include irrelevant tokens and miss critical ones. In this work, we propose Fier, a **Fi**ne-Grained and **E**fficient KV cache **R**etrieval method. Fier uses 1-bit quantized keys to estimate the importance of each token, resulting in efficient and precise retrieval. Experiments show that Fier matches full KV performance using only 11% of the cache budget across various long-context tasks, reducing decoding latency by 1.2× to 1.5×.
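
As a rough illustration of the retrieval idea described in this abstract, the sketch below shows one way to sign-quantize cached keys and score every token against the current query before keeping a small top-k subset. This is a minimal sketch under assumptions: the function names, shapes, per-token scale, and PyTorch implementation are illustrative and do not reflect the paper's actual method or code.

```python
# Minimal sketch (not the authors' implementation): per-token importance
# estimation with 1-bit (sign) quantized keys, followed by top-k retrieval
# of a small slice of the KV cache. All names and shapes are illustrative.
import torch


def quantize_keys_1bit(keys: torch.Tensor):
    """Sign-quantize keys and keep a per-token scale.

    keys: (seq_len, head_dim) full-precision key cache for one head.
    Returns (signs, scales): signs holds ±1 values, scales approximates
    the magnitude discarded by the sign quantization.
    """
    scales = keys.abs().mean(dim=-1, keepdim=True)  # (seq_len, 1)
    signs = torch.sign(keys)                         # (seq_len, head_dim)
    return signs, scales


def retrieve_topk(query: torch.Tensor, signs: torch.Tensor,
                  scales: torch.Tensor, budget: int) -> torch.Tensor:
    """Score cached tokens with the quantized keys; return top-`budget` indices.

    query: (head_dim,) current decoding-step query for the same head.
    """
    # Approximate q·k with the 1-bit keys; exact dot products would only be
    # computed later, on the small retrieved subset.
    approx_scores = (signs * scales) @ query         # (seq_len,)
    budget = min(budget, approx_scores.numel())
    return torch.topk(approx_scores, k=budget).indices


# Toy usage: keep roughly 11% of a 4096-token cache, mirroring the
# cache budget reported in the abstract.
if __name__ == "__main__":
    seq_len, head_dim = 4096, 128
    keys = torch.randn(seq_len, head_dim)
    query = torch.randn(head_dim)
    signs, scales = quantize_keys_1bit(keys)
    kept = retrieve_topk(query, signs, scales, budget=int(0.11 * seq_len))
    print(kept.shape)  # torch.Size([450])
```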

A Survey on Small Language Models
Chien Van Nguyen | Xuan Shen | Ryan Aponte | Yu Xia | Samyadeep Basu | Zhengmian Hu | Jian Chen | Mihir Parmar | Sasidhar Kunapuli | Joe Barrow | Junda Wu | Ashish Singh | Yu Wang | Jiuxiang Gu | Nesreen K. Ahmed | Nedim Lipka | Ruiyi Zhang | Xiang Chen | Tong Yu | Sungchul Kim | Hanieh Deilamsalehy | Namyong Park | Michael Rimer | Zhehao Zhang | Huanrui Yang | Puneet Mathur | Gang Wu | Franck Dernoncourt | Ryan Rossi | Thien Huu Nguyen
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era

Small Language Models (SLMs) have become increasingly important due to their ability to perform a wide range of language tasks efficiently with minimal computational resources, making them ideal for on-device, mobile, and edge deployments, among other settings. In this article, we present a comprehensive survey on SLMs, focusing on their architectures, training techniques, and model compression methods. We propose a novel taxonomy for categorizing the methods used to optimize SLMs, including model compression, pruning, and quantization techniques. We summarize the benchmark datasets and evaluation metrics commonly used to assess SLM performance. Additionally, we highlight key open challenges that remain to be addressed. Our survey aims to serve as a valuable resource for researchers and practitioners interested in developing and deploying small yet efficient language models.