Di Wang
2025
HMoE: Heterogeneous Mixture of Experts for Language Modeling
An Wang | Xingwu Sun | Ruobing Xie | Shuaipeng Li | Jiaqi Zhu | Zhen Yang | Pinxue Zhao | Weidong Han | Zhanhui Kang | Di Wang | Naoaki Okazaki | Cheng-zhong Xu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Mixture of Experts (MoE) offers remarkable performance and computational efficiency by selectively activating subsets of model parameters. Traditionally, MoE models use homogeneous experts, each with identical capacity. However, varying complexity in input data necessitates experts with diverse capabilities, while homogeneous MoE hinders effective expert specialization and efficient parameter utilization. In this study, we propose a novel Heterogeneous Mixture of Experts (HMoE) framework, where experts differ in size and thus possess diverse capacities. This heterogeneity allows for more specialized experts to handle varying token complexities more effectively. To address the imbalance in expert activation, we propose a novel training objective that encourages the frequent activation of smaller experts, so as to improve computational efficiency and parameter utilization. Extensive experiments demonstrate that HMoE achieves lower loss with fewer activated parameters and outperforms conventional homogeneous MoE models on various pre-training evaluation benchmarks. Code will be released upon acceptance.
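To make the heterogeneous-expert idea in the abstract concrete, here is a minimal PyTorch sketch, not the authors' released code: the expert widths, top-k routing, and the exact form of the size-aware auxiliary loss are illustrative assumptions only.

```python
# Minimal sketch of a heterogeneous MoE layer (illustrative, not the HMoE implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class HeterogeneousMoE(nn.Module):
    """MoE layer whose feed-forward experts differ in hidden width."""

    def __init__(self, d_model=256, expert_hidden_sizes=(128, 256, 512, 1024), top_k=2):
        super().__init__()
        self.top_k = top_k
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, h), nn.GELU(), nn.Linear(h, d_model))
            for h in expert_hidden_sizes
        ])
        self.router = nn.Linear(d_model, len(expert_hidden_sizes))
        # Relative parameter cost of each expert; used to bias the auxiliary
        # loss toward activating the cheaper (smaller) experts. (Assumed form.)
        sizes = torch.tensor(expert_hidden_sizes, dtype=torch.float)
        self.register_buffer("size_cost", sizes / sizes.sum())

    def forward(self, x):                                   # x: (tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)           # (tokens, n_experts)
        top_p, top_idx = probs.topk(self.top_k, dim=-1)     # (tokens, top_k)

        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                mask = top_idx[:, slot] == e
                if mask.any():
                    out[mask] += top_p[mask, slot].unsqueeze(-1) * expert(x[mask])

        # Size-weighted load penalty: routing mass sent to larger experts costs
        # more, so minimizing it encourages frequent activation of small experts.
        aux_loss = (probs.mean(dim=0) * self.size_cost).sum()
        return out, aux_loss


if __name__ == "__main__":
    layer = HeterogeneousMoE()
    y, aux = layer(torch.randn(32, 256))
    print(y.shape, aux.item())
```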
The Security Threat of Compressed Projectors in Large Vision-Language Models
Yudong Zhang | Ruobing Xie | Xingwu Sun | Jiansheng Chen | Zhanhui Kang | Di Wang | Yu Wang
Findings of the Association for Computational Linguistics: EMNLP 2025
The choice of a suitable vision-language projector (VLP) is critical to the successful training of large vision-language models (LVLMs). Mainstream VLPs can be broadly categorized into compressed and uncompressed projectors, and each offers distinct advantages in performance and computational efficiency. However, their security implications have not been thoroughly examined. Our comprehensive evaluation reveals significant differences in their security profiles: compressed projectors exhibit substantial vulnerabilities, allowing adversaries to successfully compromise LVLMs even with minimal knowledge of their structure. In stark contrast, uncompressed projectors demonstrate robust security properties and do not introduce additional vulnerabilities. These findings provide critical guidance for researchers in selecting optimal VLPs that enhance the security and reliability of vision-language models. The code is available at https://github.com/btzyd/TCP.
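For readers unfamiliar with the two projector families contrasted in the abstract, the following is a generic PyTorch sketch of each, with hypothetical dimensions and names; it is not the paper's evaluated models, only an illustration of why "compressed" projectors reduce the visual token count while "uncompressed" ones preserve it.

```python
# Illustrative stand-ins for the two VLP families (assumed shapes, not the paper's code).
import torch
import torch.nn as nn


class UncompressedProjector(nn.Module):
    """Per-token MLP: the number of visual tokens is preserved."""
    def __init__(self, vis_dim=1024, llm_dim=4096):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(vis_dim, llm_dim), nn.GELU(),
                                 nn.Linear(llm_dim, llm_dim))

    def forward(self, vis_tokens):              # (batch, n_tokens, vis_dim)
        return self.mlp(vis_tokens)             # (batch, n_tokens, llm_dim)


class CompressedProjector(nn.Module):
    """Learnable queries cross-attend to visual tokens, compressing them
    to a fixed, much smaller number (resampler-style)."""
    def __init__(self, vis_dim=1024, llm_dim=4096, n_queries=32, n_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, llm_dim) * 0.02)
        self.kv_proj = nn.Linear(vis_dim, llm_dim)
        self.attn = nn.MultiheadAttention(llm_dim, n_heads, batch_first=True)

    def forward(self, vis_tokens):                          # (batch, n, vis_dim)
        kv = self.kv_proj(vis_tokens)                       # (batch, n, llm_dim)
        q = self.queries.unsqueeze(0).expand(kv.size(0), -1, -1)
        out, _ = self.attn(q, kv, kv)                       # (batch, n_queries, llm_dim)
        return out


if __name__ == "__main__":
    vis = torch.randn(2, 576, 1024)
    print(UncompressedProjector()(vis).shape)   # torch.Size([2, 576, 4096])
    print(CompressedProjector()(vis).shape)     # torch.Size([2, 32, 4096])
```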
Co-authors
- Zhanhui Kang 2
- Xingwu Sun 2
- Ruobing Xie 2
- Jiansheng Chen 1
- Weidong Han 1