Cheng Han


2025

MEPT: Mixture of Expert Prompt Tuning as a Manifold Mapper
Runjia Zeng | Guangyan Sun | Qifan Wang | Tong Geng | Sohail Dianat | Xiaotian Han | Raghuveer Rao | Xueling Zhang | Cheng Han | Lifu Huang | Dongfang Liu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Considering deep neural networks as manifold mappers, the pretrain-then-fine-tune paradigm can be interpreted as a two-stage process: pretraining establishes a broad knowledge base, and fine-tuning adjusts the model parameters to activate specific neural pathways that align with the target manifold. Although prior fine-tuning approaches demonstrate success, their rigid parameter space limits their ability to dynamically activate appropriate neural pathways, rendering them ill-equipped to adapt flexibly to diverse and evolving data distributions. In light of this view, we propose a novel approach, Mixture of Expert Prompt Tuning (MEPT), as an effective and efficient manifold-mapping framework. MEPT leverages the Mixture of Experts architecture by integrating multiple prompt experts to adaptively learn diverse and non-stationary data distributions. Empirical evaluations demonstrate that MEPT outperforms several state-of-the-art parameter-efficient baselines on SuperGLUE, achieving notable improvements in mean accuracy (e.g., 1.94%) while significantly reducing activated prompts by 79.25%. The effectiveness of MEPT is further supported by theoretical insights from manifold learning and validated through neural activation pathway visualizations. Our code is available at https://runjia.tech/emnlp_mept/.
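The core mechanism the abstract describes, routing each input to a sparse subset of prompt experts via a gate, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the expert pool size, gate, and top-k routing here are generic MoE assumptions, with random arrays standing in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, prompt_len, d_model, top_k = 4, 8, 16, 1

# Pool of prompt experts (random stand-ins for learned soft prompts).
experts = rng.standard_normal((n_experts, prompt_len, d_model))
# Gating weights mapping an input representation to per-expert logits.
gate_w = rng.standard_normal((d_model, n_experts))

def route_prompt(x):
    """Select the top-k prompt experts for input x and mix them by gate weight."""
    logits = x @ gate_w                       # shape (n_experts,)
    top = np.argsort(logits)[-top_k:]         # indices of the selected experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over selected experts only
    # Weighted combination of the selected prompt experts.
    prompt = np.tensordot(weights, experts[top], axes=1)  # (prompt_len, d_model)
    return prompt, top

x = rng.standard_normal(d_model)
prompt, chosen = route_prompt(x)
print(prompt.shape, len(chosen))
```

With top_k=1, only one of the four expert prompts is activated per input, which is the sense in which sparse routing reduces the number of activated prompts relative to always using the full pool.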

2024

M2PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning
Taowen Wang | Yiyang Liu | James Chenhao Liang | Junhan Zhao | Yiming Cui | Yuning Mao | Shaoliang Nie | Jiahao Liu | Fuli Feng | Zenglin Xu | Cheng Han | Lifu Huang | Qifan Wang | Dongfang Liu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Multimodal Large Language Models (MLLMs) demonstrate remarkable performance across a wide range of domains, with increasing emphasis on enhancing their zero-shot generalization capabilities for unseen tasks across various modalities. Instruction tuning has emerged as an effective strategy for achieving zero-shot generalization by finetuning pretrained models on diverse multimodal tasks. As the scale of MLLMs continues to grow, parameter-efficient finetuning becomes increasingly critical. However, most existing parameter-efficient approaches focus only on a single modality and often overlook multimodal characteristics during finetuning. In this work, we introduce a novel Multimodal Prompt Tuning (M2PT) approach for efficient instruction tuning of MLLMs. M2PT effectively integrates visual and textual prompts into the vision encoder and language processor, respectively, during finetuning, facilitating the extraction and alignment of features across modalities. Empirical results on various multimodal evaluation datasets demonstrate the superior performance of our approach compared to several state-of-the-art baselines. A comprehensive set of ablation studies validates the effectiveness of our prompt design and the efficiency of our approach.
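The modality-specific prompt injection the abstract describes can be sketched generically: learnable prompt vectors are prepended to the visual patch embeddings and to the text token embeddings before each stream enters its encoder. This is an assumed minimal sketch of prompt prepending, not the M2PT implementation; the dimensions and prompt counts are arbitrary, and random arrays stand in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_vis, d_txt = 16, 12
n_vis_prompts, n_txt_prompts = 4, 6

# Learnable prompt vectors for each modality (random stand-ins here).
visual_prompts = rng.standard_normal((n_vis_prompts, d_vis))
textual_prompts = rng.standard_normal((n_txt_prompts, d_txt))

def prepend_prompts(prompts, embeddings):
    """Concatenate prompt vectors before the input embeddings along the sequence axis."""
    return np.concatenate([prompts, embeddings], axis=0)

patch_embeds = rng.standard_normal((49, d_vis))   # e.g. a 7x7 grid of image patches
token_embeds = rng.standard_normal((20, d_txt))   # text token embeddings

vis_seq = prepend_prompts(visual_prompts, patch_embeds)  # fed to the vision encoder
txt_seq = prepend_prompts(textual_prompts, token_embeds) # fed to the language processor
print(vis_seq.shape, txt_seq.shape)
```

During finetuning only the prompt vectors would be updated while the backbone stays frozen, which is what makes the approach parameter-efficient.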