Abstract
We introduce a data augmentation technique based on byte pair encoding and a BERT-like self-attention model to boost performance on spoken language understanding tasks. We compare and evaluate this method against a range of augmentation techniques, encompassing generative models such as VAEs and performance-boosting techniques such as synonym replacement and back-translation. We show our method performs strongly on domain and intent classification tasks for a voice assistant and in a user study focused on utterance naturalness and semantic similarity.
- Anthology ID:
- 2021.eacl-main.159
- Volume:
- Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
- Month:
- April
- Year:
- 2021
- Address:
- Online
- Editors:
- Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
- Venue:
- EACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 1852–1860
- URL:
- https://aclanthology.org/2021.eacl-main.159
- DOI:
- 10.18653/v1/2021.eacl-main.159
- Cite (ACL):
- Akhila Yerukola, Mason Bretan, and Hongxia Jin. 2021. Data Augmentation for Voice-Assistant NLU using BERT-based Interchangeable Rephrase. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1852–1860, Online. Association for Computational Linguistics.
- Cite (Informal):
- Data Augmentation for Voice-Assistant NLU using BERT-based Interchangeable Rephrase (Yerukola et al., EACL 2021)
- PDF:
- https://preview.aclanthology.org/emnlp22-frontmatter/2021.eacl-main.159.pdf
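
To make the abstract's setup concrete, below is a minimal sketch of masked-language-model rephrase augmentation, the general family of techniques the paper builds on. This is not the authors' Interchangeable Rephrase method (which additionally uses byte pair encoding to define interchangeable phrase units); the model checkpoint and the `augment_utterance` helper are illustrative assumptions, using the Hugging Face `transformers` fill-mask pipeline.

```python
# Sketch: masked-LM rephrase augmentation for NLU training utterances.
# NOT the paper's exact method; model name and helper are illustrative.
import random

from transformers import pipeline

# Standard pretrained BERT checkpoint served through the fill-mask pipeline.
unmasker = pipeline("fill-mask", model="bert-base-uncased")


def augment_utterance(utterance: str, n_variants: int = 3) -> list[str]:
    """Mask one random token and let the masked LM propose replacements."""
    tokens = utterance.split()
    if len(tokens) < 2:
        return [utterance]

    idx = random.randrange(len(tokens))
    original = tokens[idx]
    tokens[idx] = unmasker.tokenizer.mask_token  # e.g. "[MASK]"
    masked = " ".join(tokens)

    variants = []
    # Ask for one extra prediction in case the model echoes the original word.
    for pred in unmasker(masked, top_k=n_variants + 1):
        candidate = pred["token_str"].strip()
        if candidate.lower() != original.lower():
            variants.append(
                masked.replace(unmasker.tokenizer.mask_token, candidate)
            )
    return variants[:n_variants]


if __name__ == "__main__":
    # Each variant keeps the intent ("play music") while varying surface form,
    # which is the property the paper's user study evaluates.
    for v in augment_utterance("play some relaxing music in the kitchen"):
        print(v)
```

In a voice-assistant pipeline, the generated variants would be added to the training set for domain and intent classifiers, with the original utterance's labels carried over unchanged.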