Ling@CASS Solution to the NLP-TEA CGED Shared Task 2018

Qinan Hu, Yongwei Zhang, Fang Liu, Yueguo Gu


Abstract
In this study, we employ sequence-to-sequence learning to model the task of grammar error correction. The system takes potentially erroneous sentences as input and outputs correct sentences. To break through the bottleneck of the very limited amount of manually labeled data, we adopt a semi-supervised approach. Specifically, we adapt correct sentences written by native Chinese speakers to generate pseudo grammatical errors of the kind made by learners of Chinese as a second language. We use the pseudo data to pre-train the model and the CGED data to fine-tune it. Since precision is crucial for a grammar error correction system in real-world scenarios, we use ensembles to boost it. Using inputs as simple as Chinese characters, the ensembled system achieves a precision of 86.56% in the detection of erroneous sentences and a precision of 51.53% in the correction of errors of the Selection and Missing types.
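The pseudo-data generation step described above lends itself to a short illustration. The sketch below shows, in Python, one way correct native-speaker sentences might be corrupted into (erroneous, correct) training pairs covering the four CGED error types (R: redundant, M: missing, S: selection, W: word order). The corruption rules and the toy confusion set are assumptions made for illustration only, not the authors' actual procedure.

```python
import random

# Toy confusion set for the Selection (S) error type; the paper's actual
# resource for confusable characters, if any, is not specified.
CONFUSION = {"的": ["得", "地"], "在": ["再"], "做": ["作"]}

def corrupt(chars, error_type):
    """Return a corrupted copy of a character list for one CGED error type."""
    out = list(chars)
    i = random.randrange(len(out))
    if error_type == "R":        # Redundant: duplicate a character
        out.insert(i, out[i])
    elif error_type == "M":      # Missing: delete a character
        del out[i]
    elif error_type == "S":      # Selection: swap in a confusable character
        candidates = [j for j, c in enumerate(out) if c in CONFUSION]
        if candidates:
            j = random.choice(candidates)
            out[j] = random.choice(CONFUSION[out[j]])
    elif error_type == "W":      # Word order: transpose adjacent characters
        if i + 1 < len(out):
            out[i], out[i + 1] = out[i + 1], out[i]
    return "".join(out)

def make_pseudo_pair(sentence):
    """Build one (erroneous input, correct target) pseudo training pair."""
    etype = random.choice(["R", "M", "S", "W"])
    return corrupt(list(sentence), etype), sentence
```

Pairs produced this way would serve to pre-train the sequence-to-sequence model before fine-tuning on the CGED training data.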
Anthology ID:
W18-3709
Volume:
Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Yuen-Hsien Tseng, Hsin-Hsi Chen, Vincent Ng, Mamoru Komachi
Venue:
NLP-TEA
Publisher:
Association for Computational Linguistics
Pages:
70–76
URL:
https://aclanthology.org/W18-3709
DOI:
10.18653/v1/W18-3709
Cite (ACL):
Qinan Hu, Yongwei Zhang, Fang Liu, and Yueguo Gu. 2018. Ling@CASS Solution to the NLP-TEA CGED Shared Task 2018. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 70–76, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Ling@CASS Solution to the NLP-TEA CGED Shared Task 2018 (Hu et al., NLP-TEA 2018)
PDF:
https://preview.aclanthology.org/emnlp22-frontmatter/W18-3709.pdf