Sequence-Level Mixed Sample Data Augmentation

Demi Guo, Yoon Kim, Alexander Rush


Abstract
Despite their empirical success, neural networks still have difficulty capturing compositional aspects of natural language. This work proposes a simple data augmentation approach to encourage compositional behavior in neural models for sequence-to-sequence problems. Our approach, SeqMix, creates new synthetic examples by softly combining input/output sequences from the training set. We connect this approach to existing techniques such as SwitchOut and word dropout, and show that these techniques are all essentially approximating variants of a single objective. SeqMix consistently yields an improvement of approximately 1.0 BLEU over strong Transformer baselines on five different translation datasets. On tasks that require strong compositional generalization, such as SCAN and semantic parsing, SeqMix offers further improvements.
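The abstract describes SeqMix as softly combining input/output sequence pairs from the training set. Below is a minimal sketch of one way such mixing can be realized, assuming an embedding-level convex combination with a Beta-distributed coefficient in the style of mixup; the function name, tensor shapes, and default hyperparameter are illustrative assumptions, not the authors' released implementation.

import torch

def seqmix_pair(src_a, src_b, tgt_a, tgt_b, alpha=0.1):
    """Softly combine two (source, target) training examples.

    Illustrative sketch, not the paper's code. All inputs are assumed
    to be token-embedding tensors of shape (seq_len, emb_dim), padded
    to a shared length. A single mixing coefficient `lam` is drawn
    from Beta(alpha, alpha), mixup-style, and applied to both sides
    so that the synthetic input/output pair stays aligned.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()
    mixed_src = lam * src_a + (1.0 - lam) * src_b
    mixed_tgt = lam * tgt_a + (1.0 - lam) * tgt_b
    return mixed_src, mixed_tgt, lam

During training, the loss on such a mixed example would typically be the same convex combination of the losses computed against the two original targets, mirroring the mixup objective.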
Anthology ID:
2020.emnlp-main.447
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5547–5552
URL:
https://aclanthology.org/2020.emnlp-main.447
DOI:
10.18653/v1/2020.emnlp-main.447
Cite (ACL):
Demi Guo, Yoon Kim, and Alexander Rush. 2020. Sequence-Level Mixed Sample Data Augmentation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5547–5552, Online. Association for Computational Linguistics.
Cite (Informal):
Sequence-Level Mixed Sample Data Augmentation (Guo et al., EMNLP 2020)
PDF:
https://preview.aclanthology.org/naacl24-info/2020.emnlp-main.447.pdf
Optional supplementary material:
2020.emnlp-main.447.OptionalSupplementaryMaterial.zip
Video:
https://slideslive.com/38938890
Data:
SCAN