Jun Fan
2024
UEGP: Unified Expert-Guided Pre-training for Knowledge Rekindle
Yutao Mou | Kexiang Wang | Jianhe Lin | Dehong Ma | Jun Fan | Daiting Shi | Zhicong Cheng | Gu Simiu | Dawei Yin | Weiran Xu
Findings of the Association for Computational Linguistics: NAACL 2024
The pre-training and fine-tuning framework has become the standard training paradigm for NLP tasks and is also widely used in industrial-level applications. However, this paradigm still has a limitation: simply fine-tuning with task-specific objectives tends to converge to local minima, resulting in sub-optimal performance. In this paper, we first propose a new paradigm, knowledge rekindle, which aims to re-incorporate the fine-tuned expert model into the training cycle and break through the performance upper bound of the expert without introducing additional annotated data. We then propose a unified expert-guided pre-training (UEGP) framework for knowledge rekindle. Specifically, we reuse fine-tuned expert models for various downstream tasks as knowledge sources and inject task-specific prior knowledge into pre-trained language models (PLMs) by means of knowledge distillation. In this process, we perform multi-task learning with knowledge distillation and masked language modeling (MLM) objectives. We also explore whether mixture-of-expert guided pre-training (MoEGP) can further enhance the effect of knowledge rekindle. Experiments and analysis on eight datasets from the GLUE benchmark and an industrial-level search re-ranking dataset show the effectiveness of our method.
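As a rough illustration of the joint objective described in the abstract (masked language modeling plus knowledge distillation from a frozen fine-tuned expert, combined as a weighted multi-task loss), the PyTorch sketch below may be helpful. The function name, loss weighting, and temperature are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch: MLM loss + KL distillation loss against a fine-tuned expert.
# The kd_weight/temperature values and the function signature are assumptions.
import torch
import torch.nn.functional as F

def expert_guided_loss(mlm_logits, mlm_labels,
                       student_task_logits, expert_task_logits,
                       kd_weight=0.5, temperature=2.0):
    # Masked language modeling loss; non-masked positions carry ignore index -100.
    mlm_loss = F.cross_entropy(
        mlm_logits.view(-1, mlm_logits.size(-1)),
        mlm_labels.view(-1),
        ignore_index=-100,
    )
    # Distillation loss: match the expert's softened task distribution.
    kd_loss = F.kl_div(
        F.log_softmax(student_task_logits / temperature, dim=-1),
        F.softmax(expert_task_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return mlm_loss + kd_weight * kd_loss

# Toy usage with random tensors (batch of 4, sequence length 16, 2 task classes).
vocab, classes = 30522, 2
mlm_logits = torch.randn(4, 16, vocab)
mlm_labels = torch.full((4, 16), -100)
mlm_labels[:, 3] = torch.randint(vocab, (4,))
loss = expert_guided_loss(mlm_logits, mlm_labels,
                          torch.randn(4, classes), torch.randn(4, classes))
```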
2022
PILE: Pairwise Iterative Logits Ensemble for Multi-Teacher Labeled Distillation
Lianshang Cai | Linhao Zhang | Dehong Ma | Jun Fan | Daiting Shi | Yi Wu | Zhicong Cheng | Simiu Gu | Dawei Yin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
Pre-trained language models have become a crucial part of ranking systems and have recently achieved impressive results. To maintain high performance while keeping computation efficient, knowledge distillation is widely used. In this paper, we focus on two key questions in knowledge distillation for ranking models: 1) how to ensemble knowledge from multiple teachers; 2) how to utilize the label information of the data in the distillation process. We propose a unified algorithm called Pairwise Iterative Logits Ensemble (PILE) to tackle both questions simultaneously. PILE ensembles multi-teacher logits supervised by label information in an iterative way and achieves competitive performance in both offline and online experiments. The proposed method has been deployed in a real-world commercial search system.
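The abstract does not spell out PILE's update rule, so the snippet below is only a rough sketch of label-supervised multi-teacher logit ensembling on a single query: each teacher's scores are weighted by their pairwise ranking agreement with the relevance labels before combining. The iterative refinement of PILE itself is omitted, and all function names are hypothetical.

```python
# Hedged sketch: combine multi-teacher ranking logits with weights derived
# from each teacher's pairwise agreement with the labels (not the exact PILE rule).
import numpy as np

def pairwise_accuracy(scores, labels):
    """Fraction of in-query document pairs that the scores order the same way as the labels."""
    correct, total = 0, 0
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if labels[i] == labels[j]:
                continue
            total += 1
            if (scores[i] - scores[j]) * (labels[i] - labels[j]) > 0:
                correct += 1
    return correct / total if total else 0.0

def label_supervised_ensemble(teacher_logits, labels):
    """Weight each teacher by its pairwise agreement with the labels, then average logits."""
    weights = np.array([pairwise_accuracy(t, labels) for t in teacher_logits])
    if weights.sum() == 0:
        weights = np.ones(len(teacher_logits))
    weights = weights / weights.sum()
    return sum(w * t for w, t in zip(weights, teacher_logits))

# Toy usage: three teachers scoring five candidate documents for one query.
labels = np.array([2, 1, 0, 1, 0])
teachers = [np.random.randn(5) for _ in range(3)]
soft_targets = label_supervised_ensemble(teachers, labels)
```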