Chinese Grammatical Error Detection Based on BERT Model

Yong Cheng, Mofan Duan


Abstract
Automatic grammatical error correction is of great value in assisting second-language writing. In 2020, the shared task on Chinese grammatical error diagnosis (CGED) was held at NLP-TEA. As the LDU team, we participated in the competition and submitted final results. Our work focused on grammatical error detection, i.e., judging whether a sentence contains grammatical errors. We used the BERT pre-trained model for binary classification and achieved 0.0391 on the FPR track, ranking second among all teams. On the error detection track, the accuracy, recall, and F1 of our submitted result are 0.9851, 0.7496, and 0.8514, respectively.
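The abstract describes sentence-level binary classification with a pre-trained BERT model. As a hedged illustration only, the sketch below shows the general shape of that setup: a pooled sentence representation fed to a sigmoid classifier head. A toy deterministic encoder stands in for pre-trained BERT, and the dimension, weights, and function names are all illustrative assumptions, not the authors' implementation.

```python
import math
import random

DIM = 8  # toy stand-in for BERT's 768-dimensional pooled [CLS] vector


def encode(sentence):
    # Stand-in for a pre-trained encoder: deterministically maps a sentence
    # to a fixed-size vector. In the actual system this would be the [CLS]
    # representation from a Chinese BERT model; this toy version exists only
    # so the sketch is self-contained.
    rng = random.Random(sum(ord(c) for c in sentence))
    return [rng.uniform(-1.0, 1.0) for _ in range(DIM)]


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def detect_error(weights, bias, sentence, threshold=0.5):
    # Binary decision: 1 = sentence judged to contain a grammatical error,
    # 0 = sentence judged grammatical.
    v = encode(sentence)
    score = sigmoid(sum(w * x for w, x in zip(weights, v)) + bias)
    return int(score >= threshold)


# Hypothetical classifier head; in practice these weights would be learned
# by fine-tuning the encoder and head jointly on labeled CGED data.
weights = [0.1] * DIM
bias = 0.0
label = detect_error(weights, bias, "他昨天去了图书馆。")
print(label)
```

In the real system, fine-tuning would replace both the stand-in encoder and the hand-set weights; only the overall sentence-in, binary-label-out interface is taken from the abstract.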
Anthology ID:
2020.nlptea-1.15
Volume:
Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications
Month:
December
Year:
2020
Address:
Suzhou, China
Editors:
Erhong YANG, Endong XUN, Baolin ZHANG, Gaoqi RAO
Venue:
NLP-TEA
Publisher:
Association for Computational Linguistics
Pages:
108–113
URL:
https://aclanthology.org/2020.nlptea-1.15
Cite (ACL):
Yong Cheng and Mofan Duan. 2020. Chinese Grammatical Error Detection Based on BERT Model. In Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications, pages 108–113, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Chinese Grammatical Error Detection Based on BERT Model (Cheng & Duan, NLP-TEA 2020)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2020.nlptea-1.15.pdf