Wenchao Gu
2024
XMoE: Sparse Models with Fine-grained and Adaptive Expert Selection
Yuanhang Yang | Shiyi Qi | Wenchao Gu | Chaozheng Wang | Cuiyun Gao | Zenglin Xu
Findings of the Association for Computational Linguistics: ACL 2024
Sparse models, including sparse Mixture-of-Experts (MoE) models, have emerged as an effective approach for scaling Transformer models. However, they often suffer from computational inefficiency, since a significant number of parameters are unnecessarily involved in computations by multiplying values by zero or low activation values. To address this issue, we present XMoE, a novel MoE designed to enhance both the efficacy and efficiency of sparse MoE models. XMoE leverages small experts and a threshold-based router to enable tokens to selectively engage only essential parameters. Our extensive experiments on language modeling and machine translation tasks demonstrate that XMoE enhances model performance and can decrease the computation load at MoE layers by over 50% without sacrificing performance. Furthermore, we present the versatility of XMoE by applying it to dense models, enabling sparse computation during inference. We provide a comprehensive analysis and make our code available at https://anonymous.4open.science/r/XMoE.
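The abstract only describes the mechanism at a high level (small experts plus a threshold-based router). Below is a minimal, hypothetical sketch of what threshold-based expert selection could look like in PyTorch; the function name, the default threshold, and the renormalization step are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def threshold_route(hidden_states, router_weights, threshold=0.1, max_experts=8):
    """Hypothetical threshold-based router sketch (not XMoE's released code).

    Instead of a fixed top-k, each token keeps only the experts whose routing
    score exceeds `threshold`, so tokens engage fewer parameters on average.
    """
    # hidden_states: (num_tokens, d_model); router_weights: (d_model, num_experts)
    logits = hidden_states @ router_weights              # (num_tokens, num_experts)
    scores = F.softmax(logits, dim=-1)                   # per-expert routing scores

    # Consider each token's best-scoring experts first.
    top_scores, top_idx = scores.topk(max_experts, dim=-1)

    # Keep an expert only if its score passes the threshold; always keep the top-1.
    keep = top_scores >= threshold
    keep[:, 0] = True

    # Drop the rest and renormalize the surviving mixing weights.
    gate = torch.where(keep, top_scores, torch.zeros_like(top_scores))
    gate = gate / gate.sum(dim=-1, keepdim=True)
    return top_idx, gate                                 # expert indices and weights per token
```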
2022
Accelerating Code Search with Deep Hashing and Code Classification
Wenchao Gu | Yanlin Wang | Lun Du | Hongyu Zhang | Shi Han | Dongmei Zhang | Michael Lyu
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Code search aims to retrieve reusable code snippets from a source code corpus based on natural language queries. Deep learning-based methods for code search have shown promising results. However, previous methods focus on retrieval accuracy and pay little attention to the efficiency of the retrieval process. We propose a novel method, CoSHC, to accelerate code search with deep hashing and code classification, aiming to perform efficient code search without sacrificing too much accuracy. To evaluate the effectiveness of CoSHC, we apply our method to five code search models. Extensive experimental results indicate that, compared with previous code search baselines, CoSHC can save more than 90% of retrieval time while preserving at least 99% of retrieval accuracy.
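As a rough illustration of the recall-then-rerank idea behind hashing-based retrieval, the sketch below binarizes dense embeddings into hash codes and ranks candidates by Hamming distance. All names are hypothetical assumptions; this is not CoSHC's released implementation, which additionally uses code classification to narrow the candidate pool.

```python
import numpy as np

def to_hash_codes(embeddings):
    """Sign-binarize dense embeddings into compact binary hash codes."""
    return (embeddings > 0).astype(np.uint8)

def hamming_recall(query_code, corpus_codes, top_n=100):
    """Return indices of the top_n snippets closest to the query in Hamming distance.

    The small recalled set can then be re-ranked with the original dense
    vectors, trading a little accuracy for a large speedup over exhaustive
    dense similarity search.
    """
    dists = np.count_nonzero(corpus_codes != query_code, axis=1)
    return np.argsort(dists)[:top_n]

# Example usage with random embeddings standing in for model outputs.
corpus = to_hash_codes(np.random.randn(10000, 128))
query = to_hash_codes(np.random.randn(128))
candidates = hamming_recall(query, corpus, top_n=100)
```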