Zhiyuan He
2025
LeanK: Learnable K Cache Channel Pruning for Efficient Decoding
Yike Zhang | Zhiyuan He | Huiqiang Jiang | Chengruidong Zhang | Yuqing Yang | Jianyong Wang | Lili Qiu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) enable long-context tasks but face efficiency challenges due to the growing key-value (KV) cache. We propose LeanK, a learning-based method that prunes unimportant key (K) cache channels by leveraging static channel sparsity. LeanK reduces GPU memory and accelerates decoding without sacrificing accuracy. Experiments demonstrate up to 70% K cache and 16%–18% V cache memory reduction, and 1.45× decoding speedup. We also provide insights into model channels and attention heads during long-context inference by analyzing the learned importance distribution. Our code is anonymously available at https://anonymous.4open.science/r/LeanK-7A87/README.md.
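The abstract describes pruning key-cache channels according to a learned, static per-channel importance ranking. Below is a minimal sketch of that general idea, not the LeanK implementation: the tensor shapes, the channel_importance scores, and the keep_ratio are illustrative assumptions.

```python
# Illustrative sketch of channel pruning on a key (K) cache.
# Not the LeanK implementation; shapes, scores, and keep_ratio are assumptions.
import torch

def prune_k_cache(k_cache: torch.Tensor,
                  channel_importance: torch.Tensor,
                  keep_ratio: float = 0.3):
    """Keep only the highest-scoring head-dimension channels of the K cache.

    k_cache:            [num_heads, seq_len, head_dim]
    channel_importance: [num_heads, head_dim], learned offline (static sparsity)
    Returns the pruned cache and the kept-channel indices per head.
    """
    num_heads, seq_len, head_dim = k_cache.shape
    num_keep = max(1, int(head_dim * keep_ratio))
    # Static sparsity: the same channels are kept for every token position.
    keep_idx = channel_importance.topk(num_keep, dim=-1).indices      # [num_heads, num_keep]
    idx = keep_idx.unsqueeze(1).expand(num_heads, seq_len, num_keep)  # broadcast over positions
    pruned = torch.gather(k_cache, dim=-1, index=idx)                 # [num_heads, seq_len, num_keep]
    return pruned, keep_idx
```

Because the kept channels do not change across decoding steps, the smaller cache can be gathered once and reused, which is where the memory and decoding-speed savings reported in the abstract would come from.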
2024
Position Engineering: Boosting Large Language Models through Positional Information Manipulation
Zhiyuan He | Huiqiang Jiang | Zilong Wang | Yuqing Yang | Luna K. Qiu | Lili Qiu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
The performance of large language models (LLMs) is significantly influenced by the quality of the prompts provided. In response, researchers have developed numerous prompt engineering strategies aimed at modifying the prompt text to enhance task performance. In this paper, we introduce a novel technique termed position engineering, which offers a more efficient way to guide large language models. Unlike prompt engineering, which requires substantial effort to modify the text provided to LLMs, position engineering merely involves altering the positional information in the prompt without modifying the text itself. We have evaluated position engineering in two widely used LLM scenarios: retrieval-augmented generation (RAG) and in-context learning (ICL). Our findings show that position engineering substantially improves upon the baseline in both cases. Position engineering thus represents a promising new strategy for exploiting the capabilities of large language models.
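The abstract describes changing only positional information while leaving the prompt tokens untouched. A minimal sketch of that general idea follows; it is not the paper's procedure, and the segment boundary, gap size, and model interface are illustrative assumptions.

```python
# Illustrative sketch of editing position ids without changing prompt text.
# The boundary and gap values are hypothetical, not the paper's settings.
import torch

def offset_position_ids(seq_len: int, boundary: int, gap: int) -> torch.Tensor:
    """Default positions are 0..seq_len-1; insert an artificial gap of `gap`
    positions after `boundary`, so later tokens (e.g. the question in RAG)
    are placed farther away while the token text stays unchanged."""
    pos = torch.arange(seq_len)
    pos[boundary:] += gap
    return pos.unsqueeze(0)  # [1, seq_len], batch dimension for the model

# Example: a 512-token prompt where retrieved documents end at token 400
# and a 128-position gap is inserted before the question.
position_ids = offset_position_ids(seq_len=512, boundary=400, gap=128)
# Many decoder-only models accept explicit position ids, e.g.
# model(input_ids, position_ids=position_ids) in Hugging Face Transformers.
```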