Adaptive Feature-based Low-Rank Compression of Large Language Models via Bayesian Optimization
Yixin Ji | Yang Xiang | Juntao Li | Qingrong Xia | Zi Ye | Xinyu Duan | Zhefeng Wang | Kehai Chen | Min Zhang
Findings of the Association for Computational Linguistics: EMNLP 2024
In recent years, large language models (LLMs) have driven advances in natural language processing, but their growing scale has also increased the computational burden, necessitating a balance between efficiency and performance. Low-rank compression, a promising technique, reduces non-essential parameters by decomposing weight matrices into products of two low-rank matrices. Yet its application to LLMs has not been extensively studied. The key to low-rank compression lies in low-rank factorization and low-rank dimension allocation. To address the challenges of low-rank compression in LLMs, we conduct empirical research on the low-rank characteristics of large models and propose a low-rank compression method suitable for LLMs. This approach involves precise estimation of feature distributions through pooled covariance matrices and a Bayesian optimization strategy for allocating low-rank dimensions. Experiments on the LLaMA-2 models demonstrate that our method outperforms existing strong structured pruning and low-rank compression techniques in maintaining model performance at the same compression ratio.
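To make the core idea of low-rank compression concrete, the sketch below shows a generic truncated-SVD factorization of a single weight matrix into two low-rank factors. This is only an illustrative example, not the paper's method: the proposed approach additionally estimates feature distributions with pooled covariance matrices and allocates per-layer ranks via Bayesian optimization, neither of which is shown here. All names and shapes are assumptions for illustration.

```python
# Minimal sketch: generic truncated-SVD low-rank factorization W ≈ A @ B.
# This illustrates the parameter-reduction principle only; the paper's
# feature-based factorization and Bayesian rank allocation are not shown.
import numpy as np

def low_rank_factorize(W: np.ndarray, rank: int):
    """Return factors A (d_out x rank) and B (rank x d_in) with W ≈ A @ B."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B

# Hypothetical example: a 4096 x 4096 weight matrix (~16.8M parameters)
# compressed to rank 512 (~4.2M parameters across the two factors).
W = np.random.randn(4096, 4096).astype(np.float32)
A, B = low_rank_factorize(W, rank=512)
rel_error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```

The storage and compute savings come from replacing the d_out x d_in matrix with two factors of sizes d_out x r and r x d_in, which is beneficial whenever r is well below d_out * d_in / (d_out + d_in); choosing a suitable r per layer is exactly the rank-allocation problem the abstract addresses with Bayesian optimization.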