Chao Hao
2025
Dynamic Collaboration of Multi-Language Models based on Minimal Complete Semantic Units
Chao Hao | Zezheng Wang | Yanhua Huang | Ruiwen Xu | Wenzhe Niu | Xin Liu | Zitong Yu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
This paper investigates the enhancement of reasoning capabilities in language models through token-level multi-model collaboration. Our approach selects the optimal token from the next-token distributions provided by multiple models at each step of autoregressive reasoning. Contrary to the assumption that more models yield better results, we introduce a distribution-distance-based dynamic selection strategy (DDS) to optimize the multi-model collaboration process. To address the critical challenge of vocabulary misalignment in multi-model collaboration, we propose the concept of minimal complete semantic units (MCSU), a simple concept that nevertheless enables multiple language models to align naturally within the linguistic space. Experimental results across various benchmarks demonstrate the superiority of our method. The code will be released soon.
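To make the collaboration scheme concrete, the sketch below is a minimal, hypothetical illustration of token-level multi-model decoding with a distance-based selection step, not the paper's exact DDS or MCSU algorithm. It assumes all models share one vocabulary (sidestepping the misalignment that MCSU addresses), uses Jensen-Shannon divergence as the distribution distance, and picks an arbitrary threshold.

```python
import torch
import torch.nn.functional as F

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two next-token distributions."""
    m = 0.5 * (p + q)
    kl = lambda a, b: torch.sum(a * (torch.log(a + eps) - torch.log(b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

@torch.no_grad()
def collaborative_next_token(models, input_ids, distance_threshold=0.1):
    """Pick the next token from several models' distributions.

    Each `model` is assumed to be a causal LM whose forward pass returns
    `.logits` of shape (batch, seq_len, vocab). Models whose distribution
    is far from the first model's are dropped; the rest are averaged.
    The threshold value is an assumption for illustration only.
    """
    dists = []
    for model in models:
        logits = model(input_ids).logits[:, -1, :]          # (1, vocab)
        dists.append(F.softmax(logits, dim=-1).squeeze(0))  # (vocab,)

    # Dynamic selection: keep only models close to the reference model.
    kept = [dists[0]]
    for p in dists[1:]:
        if js_divergence(dists[0], p) < distance_threshold:
            kept.append(p)

    fused = torch.stack(kept).mean(dim=0)
    return int(torch.argmax(fused))
```

In an actual decoding loop, the returned token id would be appended to `input_ids` and the procedure repeated until an end-of-sequence token is produced.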
2022
NSP-BERT: A Prompt-based Few-Shot Learner through an Original Pre-training Task —— Next Sentence Prediction
Yi Sun | Yu Zheng | Chao Hao | Hangping Qiu
Proceedings of the 29th International Conference on Computational Linguistics
Using prompts to make language models perform various downstream tasks, also known as prompt-based learning or prompt-learning, has lately achieved significant success compared to the pre-train-and-fine-tune paradigm. Nonetheless, virtually all prompt-based methods are token-level, such as PET, which is based on the masked language model (MLM). In this paper, we attempt to accomplish several NLP tasks in the zero-shot and few-shot scenarios using an original BERT pre-training task abandoned by RoBERTa and other models: Next Sentence Prediction (NSP). Unlike token-level techniques, our sentence-level prompt-based method, NSP-BERT, does not need to fix the length of the prompt or the position to be predicted, allowing it to handle tasks such as entity linking with ease. Based on these properties, NSP-BERT can be applied to a variety of tasks. For single-sentence classification tasks, we present an NSP-tuning approach with binary cross-entropy loss that is competitive with PET and EFL. By continuing to train BERT on RoBERTa's corpus, the model's performance improved significantly, which indicates that the pre-training corpus is another important determinant of few-shot performance besides model size and prompting method.
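As an informal illustration of sentence-level prompting with the NSP head (not the paper's NSP-tuning procedure or its templates), the sketch below scores hand-written label prompts against an input sentence using Hugging Face's BertForNextSentencePrediction; the checkpoint and prompt wordings are assumptions.

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

# Assumed checkpoint; any BERT model with an NSP head would work.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
model.eval()

def nsp_zero_shot(sentence, label_prompts):
    """Return the label whose prompt the NSP head judges most coherent
    as a continuation of the input sentence."""
    scores = []
    with torch.no_grad():
        for prompt in label_prompts.values():
            inputs = tokenizer(sentence, prompt, return_tensors="pt")
            logits = model(**inputs).logits  # (1, 2)
            # In the HF NSP head, index 0 = "second segment is a continuation".
            scores.append(torch.softmax(logits, dim=-1)[0, 0].item())
    labels = list(label_prompts.keys())
    return labels[int(torch.tensor(scores).argmax())]

print(nsp_zero_shot(
    "The food was cold and the waiter was rude.",
    {"positive": "It was a great experience.",
     "negative": "It was a terrible experience."},
))
```

Because the prompt is a free-form second sentence rather than a masked slot, its length and position are unconstrained, which is the property the abstract highlights for tasks like entity linking.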
Co-authors
- Yanhua Huang 1
- Xin Liu (刘鑫) 1
- Wenzhe Niu 1
- Hangping Qiu 1
- Yi Sun 1