When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation

Ehsan Kamalloo, Mehdi Rezagholizadeh, Ali Ghodsi


Abstract
Data Augmentation (DA) is known to improve the generalizability of deep neural networks. Most existing DA techniques naively add a certain number of augmented samples without considering their quality or the added computational cost. To tackle this problem, a common strategy, adopted by several state-of-the-art DA methods, is to adaptively generate or re-weight augmented samples with respect to the task objective during training. However, these adaptive DA methods (1) are computationally expensive and not sample-efficient, and (2) are designed merely for a specific setting. In this work, we present a universal DA technique, called Glitter, that overcomes both issues. Glitter can be plugged into any DA method, making training sample-efficient without sacrificing performance. From a pre-generated pool of augmented samples, Glitter adaptively selects a subset of worst-case samples with maximal loss, analogous to adversarial DA; the task objective is then optimized on the selected subset without altering the training strategy. Our thorough experiments on the GLUE benchmark, SQuAD, and HellaSwag, across three widely used training setups (consistency training, self-distillation, and knowledge distillation), reveal that Glitter is substantially faster to train and achieves competitive performance compared to strong baselines.
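The worst-case selection step described in the abstract can be sketched in a few lines of PyTorch. This is a hypothetical illustration based only on the abstract, not the authors' released implementation (see huawei-noah/kd-nlp for that); the function names, the pool layout, and the choice of k are all assumptions.

# Minimal sketch of the selection step from the abstract: from a
# pre-generated pool of augmented samples, keep only the k worst-case
# (maximal-loss) variants, analogous to adversarial DA. All names here
# (select_worst_case, training_step, k) are illustrative, not the paper's API.
import torch
import torch.nn.functional as F

def select_worst_case(model, aug_inputs, aug_labels, k):
    # aug_inputs: (pool_size, ...) augmented variants of one example.
    # Selection itself needs no gradients, so it stays cheap.
    with torch.no_grad():
        logits = model(aug_inputs)
        losses = F.cross_entropy(logits, aug_labels, reduction="none")
    worst = torch.topk(losses, k).indices  # indices of maximal-loss variants
    return aug_inputs[worst], aug_labels[worst]

def training_step(model, optimizer, orig_x, orig_y, aug_x, aug_y, k=2):
    # Optimize the task objective on the original batch plus only the
    # selected worst-case augmentations, so per-step cost is bounded by
    # k rather than by the full pool size.
    sel_x, sel_y = select_worst_case(model, aug_x, aug_y, k)
    x = torch.cat([orig_x, sel_x])
    y = torch.cat([orig_y, sel_y])
    loss = F.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because the selected subset is re-chosen at every step with the current model's losses, the training loop itself is unchanged, which is what lets a scheme like this plug into different setups (consistency training, self- or knowledge distillation) as the abstract claims.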
Anthology ID: 2022.findings-acl.84
Volume: Findings of the Association for Computational Linguistics: ACL 2022
Month: May
Year: 2022
Address: Dublin, Ireland
Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1048–1062
URL: https://aclanthology.org/2022.findings-acl.84
DOI: 10.18653/v1/2022.findings-acl.84
Cite (ACL):
Ehsan Kamalloo, Mehdi Rezagholizadeh, and Ali Ghodsi. 2022. When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1048–1062, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation (Kamalloo et al., Findings 2022)
PDF: https://preview.aclanthology.org/emnlp-22-attachments/2022.findings-acl.84.pdf
Code: huawei-noah/kd-nlp
Data: ANLI, GLUE, HellaSwag, IMDb Movie Reviews, MultiNLI, QNLI, SICK, SQuAD