Abstract
Data augmentation expands the training data for grammatical error correction (GEC) by applying noising schemes to clean text. In practice, manually annotated training data contains a great number of real error patterns. We argue that these real error patterns can be introduced into clean text to generate more realistic, higher-quality synthetic data, a direction not fully explored by previous studies. Moreover, we find that linguistic knowledge can be incorporated into data augmentation to generate more representative and more diverse synthetic data. In this paper, we propose a novel data augmentation method that fully exploits real error patterns and linguistic knowledge for the GEC task. Extensive experiments on public datasets show that our method outperforms several strong baselines while using far less external unlabeled clean text, highlighting its effectiveness for the GEC task, where large-scale labeled training data is scarce.
- Anthology ID:
- 2021.conll-1.17
- Volume:
- Proceedings of the 25th Conference on Computational Natural Language Learning
- Month:
- November
- Year:
- 2021
- Address:
- Online
- Editors:
- Arianna Bisazza, Omri Abend
- Venue:
- CoNLL
- SIG:
- SIGNLL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 223–233
- URL:
- https://aclanthology.org/2021.conll-1.17
- DOI:
- 10.18653/v1/2021.conll-1.17
- Cite (ACL):
- Xia Li and Junyi He. 2021. Data Augmentation of Incorporating Real Error Patterns and Linguistic Knowledge for Grammatical Error Correction. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 223–233, Online. Association for Computational Linguistics.
- Cite (Informal):
- Data Augmentation of Incorporating Real Error Patterns and Linguistic Knowledge for Grammatical Error Correction (Li & He, CoNLL 2021)
- PDF:
- https://preview.aclanthology.org/dois-2013-emnlp/2021.conll-1.17.pdf
- Data
- FCE, One Billion Word Benchmark
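The core idea in the abstract, harvesting real error patterns from annotated data and injecting them into clean text, can be sketched in a few lines. This is only a toy illustration: the word-level alignment, the `extract_patterns` / `inject_errors` helpers, and the replacement probability `p` are all assumptions for demonstration, not the paper's actual method, which additionally incorporates linguistic knowledge.

```python
import random

def extract_patterns(annotated_pairs):
    """Collect word-level (correct -> erroneous) patterns from aligned
    (erroneous, corrected) sentence pairs. Toy alignment: position-wise zip."""
    patterns = {}
    for erroneous, corrected in annotated_pairs:
        for wrong, right in zip(erroneous.split(), corrected.split()):
            if wrong != right:
                patterns.setdefault(right, []).append(wrong)
    return patterns

def inject_errors(clean_sentence, patterns, rng, p=0.5):
    """Synthesize a noisy source sentence by replacing words with
    harvested real error forms, each with probability p."""
    return " ".join(
        rng.choice(patterns[w]) if w in patterns and rng.random() < p else w
        for w in clean_sentence.split()
    )

# Tiny annotated corpus (hypothetical examples).
annotated = [("He go to school", "He goes to school"),
             ("She have a cat", "She has a cat")]
patterns = extract_patterns(annotated)
# Inject harvested errors into clean text to create a synthetic training pair.
print(inject_errors("He goes to school", patterns, random.Random(0), p=1.0))
```

The synthesized (noisy source, clean target) pairs would then augment the labeled GEC training set, which is why errors drawn from real annotations tend to be more realistic than rule-based or random noise.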