CHESS: Optimizing LLM Inference via Channel-Wise Thresholding and Selective Sparsification

Junhui He, Shangyu Wu, Weidong Wen, Chun Jason Xue, Qingan Li


Abstract
Deploying large language models (LLMs) on edge devices presents significant challenges due to the substantial computational overhead and memory requirements. Activation sparsification can mitigate these resource challenges by reducing the number of activated neurons during inference. Existing methods typically employ thresholding-based sparsification based on the statistics of activation tensors. However, they do not model the impact of activation sparsification on model performance, which leads to unnecessarily large accuracy degradation. To address these limitations, this paper reformulates the activation sparsification problem to explicitly capture the relationship between activation sparsity and model performance. Then, this paper proposes CHESS, a general activation sparsification approach via CHannel-wise thrEsholding and Selective Sparsification. First, channel-wise thresholding assigns a unique threshold to each activation channel in the feed-forward network (FFN) layers. Then, selective sparsification applies thresholding-based activation sparsification to specific layers within the attention modules. Finally, we detail the implementation of sparse kernels to accelerate LLM inference. Experimental results demonstrate that the proposed CHESS achieves lower performance degradation across eight downstream tasks while activating fewer parameters than existing methods, thus speeding up LLM inference by up to 1.27x.
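To illustrate the channel-wise thresholding idea described above, here is a minimal NumPy sketch: each activation channel gets its own threshold, and activations whose magnitude falls below their channel's threshold are zeroed. The function name, threshold values, and example tensor are hypothetical illustrations, not the authors' implementation (which uses optimized sparse kernels).

```python
import numpy as np

def channel_wise_sparsify(x, thresholds):
    """Zero out activations whose magnitude is below the per-channel
    threshold; entries at or above the threshold are kept unchanged.

    x          : (tokens, channels) activation matrix
    thresholds : (channels,) one threshold per activation channel
    """
    x = np.asarray(x, dtype=float)
    mask = np.abs(x) >= thresholds  # broadcasts thresholds over tokens
    return x * mask

# Hypothetical example: 2 tokens, 4 activation channels.
acts = np.array([[0.05, -0.8, 0.3, -0.02],
                 [0.50,  0.1, -0.4,  0.90]])
thr = np.array([0.1, 0.2, 0.35, 0.1])  # unique threshold per channel
sparse = channel_wise_sparsify(acts, thr)
```

In contrast, a conventional tensor-wise scheme would apply one threshold to all channels; assigning per-channel thresholds lets channels with different magnitude statistics be sparsified at different rates.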
Anthology ID:
2024.emnlp-main.1038
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
18658–18668
URL:
https://preview.aclanthology.org/Add-Cong-Liu-Florida-Atlantic-University-author-id/2024.emnlp-main.1038/
DOI:
10.18653/v1/2024.emnlp-main.1038
Cite (ACL):
Junhui He, Shangyu Wu, Weidong Wen, Chun Jason Xue, and Qingan Li. 2024. CHESS: Optimizing LLM Inference via Channel-Wise Thresholding and Selective Sparsification. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 18658–18668, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
CHESS: Optimizing LLM Inference via Channel-Wise Thresholding and Selective Sparsification (He et al., EMNLP 2024)
PDF:
https://preview.aclanthology.org/Add-Cong-Liu-Florida-Atlantic-University-author-id/2024.emnlp-main.1038.pdf