Mini-DA: Improving Your Model Performance through Minimal Data Augmentation using LLM
Shuangtao Yang | Xiaoyi Liu | Xiaozheng Dong | Bo Fu
Proceedings of the Fifth Workshop on Data Science with Human-in-the-Loop (DaSH 2024)
When performing data augmentation with large language models (LLMs), the common approach is to directly generate a large number of new samples from the original dataset and then train the model on the combination of the augmented and original data. However, this data generation demands extensive computational resources. In this study, we propose Mini-DA, a minimized data augmentation method that leverages feedback from the target model during training to select only the most challenging samples from the validation set for augmentation. Our experimental results on text classification show that, using as little as 13 percent of the original augmentation volume, Mini-DA achieves performance comparable to full data augmentation on an intent detection task, significantly improving data and computational resource efficiency.
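The core idea, as described in the abstract, is to augment only the validation samples the target model finds hardest. Below is a minimal, hypothetical Python sketch of such a loop; the function names, the scikit-learn-style `predict_proba` interface, and the `llm_generate` placeholder are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a Mini-DA-style selection loop (names, thresholds, and
# interfaces are illustrative; the paper does not specify this exact API).

def select_hard_samples(model, val_texts, val_labels, k):
    """Rank validation samples by the target model's confidence in the gold
    label and return the k least-confident (most challenging) ones."""
    scored = []
    for text, label in zip(val_texts, val_labels):
        probs = model.predict_proba([text])[0]      # assumed sklearn-style classifier
        scored.append((probs[label], text, label))  # lower gold-label prob = harder sample
    scored.sort(key=lambda item: item[0])
    return [(text, label) for _, text, label in scored[:k]]


def augment_with_llm(hard_samples, llm_generate, n_per_sample=3):
    """Ask an LLM to paraphrase only the hard samples, keeping their labels.
    `llm_generate(prompt)` is a placeholder for whatever LLM client is used
    and is assumed to return a list of paraphrases."""
    augmented = []
    for text, label in hard_samples:
        prompt = (f"Paraphrase the following utterance {n_per_sample} times, "
                  f"preserving its intent:\n{text}")
        for paraphrase in llm_generate(prompt):
            augmented.append((paraphrase, label))
    return augmented


# Usage sketch: after an evaluation pass, augment only the hardest validation
# samples and fold them back into the training set.
# hard = select_hard_samples(model, val_texts, val_labels, k=50)
# train_data += augment_with_llm(hard, llm_generate)
```

Because generation is restricted to the hardest samples, the number of LLM calls stays a small fraction of full-dataset augmentation, which is where the reported efficiency gain comes from.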