HookMoE: A learnable performance compensation strategy of Mixture-of-Experts for LLM inference acceleration

Cheng Longkai, Along He, Mulin Li, Xie Xueshuo, Tao Li


Abstract
Mixture of Experts (MoE) architectures have emerged as a promising paradigm for scaling model capacity through top-k routing mechanisms. Although reducing the number of activated experts inherently enables inference acceleration, this efficiency gain typically comes at the cost of significant performance degradation. To address this trade-off between efficiency and performance, we propose HookMoE, a plug-and-play single-layer compensation framework that effectively restores performance using only a small post-training calibration set. Our method strategically inserts a lightweight trainable Hook module immediately preceding selected transformer blocks. In comprehensive evaluations on four popular MoE models, our method reduces the number of activated experts by more than 50% and achieves a 1.42× inference speed-up during the prefill stage, with an average performance degradation of only 2.5% across various benchmarks. Through systematic analysis, we further reveal that the upper layers require fewer active experts, offering actionable insights for refining dynamic expert selection strategies and enhancing the overall efficiency of MoE models.
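The sketch below illustrates the idea described in the abstract: a small trainable module placed immediately before a transformer block whose MoE layer is run with a reduced top-k, compensating the hidden states for the experts that are no longer activated. It is a minimal illustration under assumptions, not the authors' released implementation; the names HookAdapter and TinyMoELayer and the bottleneck-adapter design are hypothetical, and in practice the adapter would be trained on a small calibration set while the base model stays frozen.

import torch
import torch.nn as nn
import torch.nn.functional as F


class HookAdapter(nn.Module):
    """Hypothetical lightweight residual adapter that adjusts hidden states
    before they enter a block running with fewer activated experts."""

    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        nn.init.zeros_(self.up.weight)  # start as an identity (no-op) mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(F.gelu(self.down(x)))


class TinyMoELayer(nn.Module):
    """Toy top-k MoE feed-forward layer; top_k can be lowered at inference."""

    def __init__(self, d_model: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.router(x)                          # (B, T, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)   # keep top-k experts
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., slot] == e).unsqueeze(-1)
                if mask.any():
                    out = out + mask * weights[..., slot:slot + 1] * expert(x)
        return out


if __name__ == "__main__":
    d_model = 128
    x = torch.randn(2, 16, d_model)
    layer = TinyMoELayer(d_model, n_experts=8, top_k=4)
    hook = HookAdapter(d_model)

    layer.top_k = 2            # halve the number of activated experts
    y = layer(hook(x))         # Hook output feeds the reduced-top-k block
    print(y.shape)             # torch.Size([2, 16, 128])

Because the adapter is initialized to an identity mapping and only its parameters would be updated during calibration, it can be dropped into an existing model without changing behavior before training, matching the plug-and-play framing in the abstract.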
Anthology ID:
2025.emnlp-main.1610
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
31582–31594
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1610/
Cite (ACL):
Cheng Longkai, Along He, Mulin Li, Xie Xueshuo, and Tao Li. 2025. HookMoE: A learnable performance compensation strategy of Mixture-of-Experts for LLM inference acceleration. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 31582–31594, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
HookMoE: A learnable performance compensation strategy of Mixture-of-Experts for LLM inference acceleration (Longkai et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1610.pdf
Checklist:
2025.emnlp-main.1610.checklist.pdf