Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data

Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, Jingming Liu


Abstract
Neural machine translation systems have become the state-of-the-art approach for the Grammatical Error Correction (GEC) task. In this paper, we propose a copy-augmented architecture for the GEC task that copies the unchanged words from the source sentence to the target sentence. Since GEC suffers from not having enough labeled training data to achieve high accuracy, we pre-train the copy-augmented architecture as a denoising auto-encoder on the unlabeled One Billion Word Benchmark and compare the fully pre-trained model with a partially pre-trained one. This is the first time that copying words from the source context and fully pre-training a sequence-to-sequence model have been applied to the GEC task. Moreover, we add token-level and sentence-level multi-task learning for the GEC task. The evaluation results on the CoNLL-2014 test set show that our approach outperforms all recently published state-of-the-art results by a large margin.
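The core of the copy-augmented architecture is to mix the decoder's generation distribution over the target vocabulary with a copy distribution over the source tokens, weighted by a learned balancing factor. Below is a minimal PyTorch sketch of that mixing step; the function name, tensor shapes, and the scatter-based mapping of attention weights onto vocabulary ids are illustrative assumptions, not the authors' exact fairseq implementation.

import torch
import torch.nn.functional as F

def copy_augmented_distribution(gen_logits, copy_attn_scores, src_tokens, balance_logit):
    # Hypothetical sketch of a copy-augmented output layer.
    #   gen_logits:       (batch, vocab)   decoder scores over the target vocabulary
    #   copy_attn_scores: (batch, src_len) attention scores over source positions
    #   src_tokens:       (batch, src_len) source token ids (LongTensor)
    #   balance_logit:    (batch, 1)       scalar deciding copy vs. generate
    p_gen = F.softmax(gen_logits, dim=-1)        # probability of generating each vocab word
    attn = F.softmax(copy_attn_scores, dim=-1)   # probability of copying each source position
    # Scatter attention mass from source positions onto vocabulary ids,
    # so copying source token w adds probability to the vocab entry for w.
    p_copy = torch.zeros_like(p_gen).scatter_add_(1, src_tokens, attn)
    alpha = torch.sigmoid(balance_logit)         # copy weight in [0, 1]
    return alpha * p_copy + (1.0 - alpha) * p_gen  # final distribution over the vocabulary

For the denoising pre-training, the same model is trained on the unlabeled corpus to reconstruct clean sentences from artificially corrupted ones (for example, by deleting, inserting, or replacing tokens), so that large unlabeled data can stand in for scarce error-corrected sentence pairs.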
Anthology ID:
N19-1014
Volume:
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Jill Burstein, Christy Doran, Thamar Solorio
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
156–165
URL:
https://aclanthology.org/N19-1014
DOI:
10.18653/v1/N19-1014
Cite (ACL):
Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 156–165, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data (Zhao et al., NAACL 2019)
PDF:
https://preview.aclanthology.org/add_acl24_videos/N19-1014.pdf
Video:
https://preview.aclanthology.org/add_acl24_videos/N19-1014.mp4
Code:
zhawe01/fairseq-gec (+ additional community code)
Data:
Billion Word Benchmark, CoNLL, CoNLL-2014 Shared Task: Grammatical Error Correction, FCE, JFLEG, One Billion Word Benchmark