Data Augmentation using Pre-trained Transformer Models

Varun Kumar, Ashutosh Choudhary, Eunah Cho


Abstract
Language-model-based pre-trained models such as BERT have provided significant gains across different NLP tasks. In this paper, we study different types of transformer-based pre-trained models, such as auto-regressive models (GPT-2), auto-encoder models (BERT), and seq2seq models (BART), for conditional data augmentation. We show that prepending the class labels to text sequences provides a simple yet effective way to condition the pre-trained models for data augmentation. Additionally, on three classification benchmarks, the pre-trained seq2seq model outperforms other data augmentation methods in a low-resource setting. Further, we explore how data augmentation based on different pre-trained models differs in terms of data diversity, and how well such methods preserve the class-label information.
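The label-prepending idea described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' released pipeline (see the Code link below): it assumes the Hugging Face transformers library, uses GPT-2 as the auto-regressive model, and uses a hypothetical SNIPS-style intent label "AddToPlaylist"; in practice the language model would first be fine-tuned on the label-prepended training sequences before sampling new examples.

```python
# Sketch of label-conditioned data augmentation with an auto-regressive LM:
# each training example is rewritten as "<label> SEP <text> EOS", the LM is
# fine-tuned on these sequences, and synthetic examples are then generated
# by prompting with the "<label> SEP" prefix.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def format_example(label, text, sep_token="SEP", eos_token="<|endoftext|>"):
    # Prepend the class label so the LM learns p(text | label).
    return f"{label} {sep_token} {text} {eos_token}"

def generate_for_label(label, sep_token="SEP", max_length=40, num_samples=3):
    # Prompt the (ideally fine-tuned) model with the label prefix and sample.
    prompt = f"{label} {sep_token}"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_k=50,
        max_length=max_length,
        num_return_sequences=num_samples,
        pad_token_id=tokenizer.eos_token_id,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

if __name__ == "__main__":
    # Hypothetical intent label and utterance, for illustration only.
    print(format_example("AddToPlaylist", "add this song to my workout mix"))
    for sample in generate_for_label("AddToPlaylist"):
        print(sample)
```

The same prefix-conditioning scheme carries over to the other model families studied in the paper (masked-token infilling for BERT-style auto-encoders, denoising generation for BART-style seq2seq models), with only the fine-tuning objective changing.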
Anthology ID:
2020.lifelongnlp-1.3
Volume:
Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems
Month:
December
Year:
2020
Address:
Suzhou, China
Venue:
lifelongnlp
Publisher:
Association for Computational Linguistics
Pages:
18–26
URL:
https://aclanthology.org/2020.lifelongnlp-1.3
Cite (ACL):
Varun Kumar, Ashutosh Choudhary, and Eunah Cho. 2020. Data Augmentation using Pre-trained Transformer Models. In Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems, pages 18–26, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Data Augmentation using Pre-trained Transformer Models (Kumar et al., lifelongnlp 2020)
PDF:
https://aclanthology.org/2020.lifelongnlp-1.3.pdf
Code
varinf/TransformersDataAugmentation (+ additional community code)
Data
SNIPS, SST