Jiayu Zhou
2022
Dynamic Augmentation Data Selection for Few-shot Text Classification
Guangliang Liu | Lifeng Jin | Owen Yuan | Jiayu Zhou
Findings of the Association for Computational Linguistics: EMNLP 2022
Data augmentation is a popular method for fine-tuning pre-trained language models to increase model robustness and performance. Whether augmentation data come from modifying gold training data (in-sample augmentation) or are harvested from general-domain unlabeled data (out-of-sample augmentation), their quality is the key to successful fine-tuning. In this paper, we propose a dynamic data selection method that selects effective augmentation data from different augmentation sources according to the model’s learning stage, by identifying the set of augmentation samples that best facilitates the learning process of the current model. The method first filters out augmentation samples with noisy pseudo labels through a curriculum learning strategy, then estimates the effectiveness of the remaining augmentation data by their influence scores on the current model at every update, tailoring the data selection process tightly to the model parameters. A two-stage augmentation strategy further applies in-sample and out-of-sample augmentation at different learning stages. Experiments with both kinds of augmentation data on a variety of sentence classification tasks show that our method outperforms strong baselines, demonstrating its effectiveness. Analysis confirms the dynamic nature of data effectiveness and the importance of model learning stages in the utilization of augmentation data.
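The influence-based selection step the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it uses a plain logistic-regression model and a first-order gradient-alignment (TracIn-style) score in place of the paper's influence estimate, and all function names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, x, y):
    # Gradient of the logistic loss for one sample (x, y), y in {0, 1}:
    # d/dw [-y log p - (1-y) log(1-p)] = (p - y) * x
    return (sigmoid(x @ w) - y) * x

def influence_scores(w, aug_X, aug_y, val_X, val_y):
    # First-order approximation: a candidate sample is helpful if a
    # gradient step on it also reduces the validation loss, i.e. its
    # gradient aligns with the mean validation gradient.
    g_val = np.mean([grad_logloss(w, x, y)
                     for x, y in zip(val_X, val_y)], axis=0)
    return np.array([g_val @ grad_logloss(w, x, y)
                     for x, y in zip(aug_X, aug_y)])

def select_augmentation(w, aug_X, aug_y, val_X, val_y, k):
    # Re-score candidates against the *current* parameters w at every
    # update, so selection tracks the model's learning stage.
    scores = influence_scores(w, aug_X, aug_y, val_X, val_y)
    keep = np.argsort(scores)[::-1][:k]  # top-k most helpful candidates
    return keep, scores
```

Because the scores depend on the current parameters `w`, the selected subset changes as training progresses, which is the sense in which the selection is dynamic.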
2011
Using Inverse lambda and Generalization to Translate English to Formal Languages
Chitta Baral | Juraj Dzifcak | Marcos Alvarez Gonzalez | Jiayu Zhou
Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011)