Zhaoye Fei
2024
Balanced Data Sampling for Language Model Training with Clustering
Yunfan Shao | Linyang Li | Zhaoye Fei | Hang Yan | Dahua Lin | Xipeng Qiu
Findings of the Association for Computational Linguistics: ACL 2024
Data plays a fundamental role in the training of Large Language Models (LLMs). While attention has been paid to the collection and composition of datasets, determining the data sampling strategy used during training remains an open question. Most LLMs are trained with a simple strategy: random sampling. However, random sampling ignores the unbalanced distribution of the training data, which can be sub-optimal. In this paper, we propose ClusterClip Sampling to balance the text distribution of training data for better model training. Specifically, ClusterClip Sampling utilizes data clustering to reflect the data distribution of the training set and balances common and rare samples during training based on the clustering results. A repetition clip operation is introduced to mitigate the overfitting caused by samples from certain clusters. Extensive experiments validate the effectiveness of ClusterClip Sampling, which outperforms random sampling and other cluster-based sampling variants across various training datasets and large language models.
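The two mechanisms the abstract describes, cluster-balanced sampling and a repetition clip, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the uniform-over-clusters policy, and the `max_repetitions` parameter are assumptions for the sake of the example; cluster labels are taken as given (e.g., from k-means over document embeddings).

```python
import random
from collections import defaultdict

def clusterclip_sample(cluster_ids, num_samples, max_repetitions=4, seed=0):
    """Cluster-balanced sampling with a repetition clip (illustrative sketch).

    cluster_ids: cluster label for each training example, indexed 0..N-1.
    Clusters are drawn uniformly, so rare clusters are sampled as often as
    common ones; once an example has been drawn `max_repetitions` times it is
    removed, so small clusters cannot be repeated into overfitting.
    """
    rng = random.Random(seed)
    by_cluster = defaultdict(list)
    for idx, cid in enumerate(cluster_ids):
        by_cluster[cid].append(idx)

    counts = defaultdict(int)
    picks = []
    while len(picks) < num_samples and by_cluster:
        cid = rng.choice(sorted(by_cluster))   # uniform over clusters, not examples
        idx = rng.choice(by_cluster[cid])
        picks.append(idx)
        counts[idx] += 1
        if counts[idx] >= max_repetitions:     # repetition clip: retire the example
            by_cluster[cid].remove(idx)
            if not by_cluster[cid]:
                del by_cluster[cid]
    return picks
```

Under plain random sampling, a cluster holding 80% of the data would receive roughly 80% of the draws; under the uniform-over-clusters policy above, each cluster receives an equal expected share until the clip retires its examples.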
2022
Coarse-to-Fine: Hierarchical Multi-task Learning for Natural Language Understanding
Zhaoye Fei | Yu Tian | Yongkang Wu | Xinyu Zhang | Yutao Zhu | Zheng Liu | Jiawen Wu | Dejiang Kong | Ruofei Lai | Zhao Cao | Zhicheng Dou | Xipeng Qiu
Proceedings of the 29th International Conference on Computational Linguistics
Generalized text representations are the foundation of many natural language understanding tasks. To fully utilize different corpora, models inevitably need to understand the relevance among them. However, many methods ignore this relevance and directly adopt a single-channel model (a coarse paradigm) for all tasks, which lacks sufficient rationality and interpretability. In addition, some existing works learn downstream tasks by stitching together skill blocks (a fine paradigm), which may produce irrational results due to redundancy and noise. In this work, we first analyze task correlation from three different perspectives, namely data properties, manual design, and model-based relevance, based on which similar tasks are grouped together. Then, we propose a hierarchical framework with a coarse-to-fine paradigm, in which the bottom level is shared across all tasks, the mid-level is divided among different groups, and the top level is assigned to each individual task. This allows our model to learn basic language properties from all tasks, boost performance on relevant tasks, and reduce the negative impact of irrelevant tasks. Our experiments on 13 benchmark datasets across five natural language understanding tasks demonstrate the superiority of our method.
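The three-level parameter layout described in the abstract (shared bottom, group-level middle, task-specific top) can be sketched as a simple routing scheme. The task names, the grouping, and the module names below are hypothetical, chosen only to make the coarse-to-fine structure concrete; the paper's actual tasks and groups may differ.

```python
# Hypothetical task-to-group assignment: similar tasks share a mid-level module.
TASK_GROUPS = {
    "nli": "classification",
    "sentiment": "classification",
    "ner": "tagging",
    "pos": "tagging",
    "qa": "span",
}

def build_route(task):
    """Return the module path an input for `task` flows through.

    Bottom level: one module shared by every task (basic language properties).
    Mid level:    one module per task group (boosts related tasks, shields
                  against unrelated ones).
    Top level:    one head per individual task.
    """
    group = TASK_GROUPS[task]
    return ["bottom_shared", f"mid_{group}", f"head_{task}"]
```

For example, `build_route("nli")` and `build_route("sentiment")` share both the bottom module and the `mid_classification` module but diverge at their task-specific heads, while `build_route("ner")` shares only the bottom module with them.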