AMR-DA: Data Augmentation by Abstract Meaning Representation

Ziyi Shou, Yuxin Jiang, Fangzhen Lin


Abstract
Abstract Meaning Representation (AMR) is a sentence-level semantic representation used in NLP/NLU. In this paper, we propose to use it for data augmentation in NLP. Our proposed data augmentation technique, called AMR-DA, converts a sample sentence into an AMR graph, modifies the graph according to various data augmentation policies, and then generates augmented sentences from the modified graphs. Our method combines sentence-level techniques such as back translation with token-level techniques such as EDA (Easy Data Augmentation). To evaluate the effectiveness of our method, we apply it to the tasks of semantic textual similarity (STS) and text classification. For STS, our experiments show that AMR-DA boosts the performance of state-of-the-art models on several STS benchmarks. For text classification, AMR-DA outperforms EDA and AEDA and leads to more robust improvements.
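The text-to-graph-to-text pipeline described in the abstract can be sketched with off-the-shelf tools. The snippet below is a minimal illustration, not the authors' released code: it assumes the amrlib library for AMR parsing and generation (load_stog_model / load_gtos_model) and the penman library for graph manipulation, and it implements one hypothetical graph-level policy, random deletion of a leaf concept, loosely analogous to EDA's random deletion; the paper's actual augmentation policies operate on the parsed AMR graphs and may differ in detail.

```python
"""Illustrative AMR-DA-style pipeline (sketch only, assuming amrlib + penman)."""
import random

import amrlib   # assumed: pretrained sentence<->AMR models installed
import penman   # AMR graph (de)serialization


def random_delete_leaf(amr_string: str, seed: int = 0) -> str:
    """Drop one leaf concept from the AMR graph (stand-in for a deletion policy)."""
    g = penman.decode(amr_string)
    rng = random.Random(seed)

    # Leaf variables: appear as an edge target but never as an edge source.
    sources = {e.source for e in g.edges()}
    leaves = [e.target for e in g.edges()
              if e.target not in sources and e.target != g.top]
    if not leaves:
        return amr_string  # nothing safe to delete

    victim = rng.choice(leaves)
    # Keep every triple that does not mention the deleted variable.
    kept = [t for t in g.triples if victim not in (t[0], t[2])]
    return penman.encode(penman.Graph(kept, top=g.top))


if __name__ == "__main__":
    stog = amrlib.load_stog_model()   # sentence -> AMR graph
    gtos = amrlib.load_gtos_model()   # AMR graph -> sentence

    sentence = "The quick brown fox jumps over the lazy dog."
    graph = stog.parse_sents([sentence])[0]

    augmented_graph = random_delete_leaf(graph)
    # generate() is assumed to return (sentences, clip_flags) as in amrlib's T5 generator
    augmentations, _ = gtos.generate([augmented_graph])
    print(augmentations[0])
```

Other policies (e.g., swapping or inserting subgraphs, or replacing concepts with synonyms) would slot into the same place as the deletion function, leaving the parse and generation steps unchanged.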
Anthology ID:
2022.findings-acl.244
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3082–3098
URL:
https://aclanthology.org/2022.findings-acl.244
DOI:
10.18653/v1/2022.findings-acl.244
Cite (ACL):
Ziyi Shou, Yuxin Jiang, and Fangzhen Lin. 2022. AMR-DA: Data Augmentation by Abstract Meaning Representation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3082–3098, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
AMR-DA: Data Augmentation by Abstract Meaning Representation (Shou et al., Findings 2022)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2022.findings-acl.244.pdf
Code
zzshou/amr-data-augmentation