TMU-NLP System Using BERT-based Pre-trained Model to the NLP-TEA CGED Shared Task 2020

Hongfei Wang, Mamoru Komachi


Abstract
In this paper, we introduce our system for the NLP-TEA 2020 shared task on Chinese Grammatical Error Diagnosis (CGED). In recent years, pre-trained models have been studied extensively, and many downstream tasks have benefited from them. In this study, we treat the grammatical error diagnosis (GED) task as a grammatical error correction (GEC) problem and propose a method that incorporates a pre-trained model into an encoder-decoder model to solve it.
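The abstract only outlines the approach. As a rough illustration of how a BERT-based pre-trained model can be incorporated into an encoder-decoder model for correction, the following is a minimal sketch using the Hugging Face transformers library. The checkpoint name (bert-base-chinese), the toy input sentence, and all configuration choices here are assumptions for illustration only, not the paper's actual setup; the model would also need fine-tuning on erroneous/corrected sentence pairs before producing useful corrections.

```python
# Minimal sketch: warm-starting an encoder-decoder from BERT weights
# (illustrative only; not the authors' exact architecture or training setup).
from transformers import BertTokenizer, EncoderDecoderModel

# Hypothetical checkpoint choice for Chinese text.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")

# Initialize both encoder and decoder from BERT; the decoder's
# cross-attention layers are newly initialized and learned in fine-tuning.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-chinese", "bert-base-chinese"
)

# Generation settings required for a BERT-initialized decoder.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id

# Toy GEC-style usage: an erroneous learner sentence goes in; after
# fine-tuning on GEC pairs, the model would generate a corrected one.
source = "我昨天去了图书馆看书了。"  # hypothetical erroneous input
inputs = tokenizer(source, return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Under this framing, GED labels (whether and where a sentence contains an error) can be read off by comparing the model's corrected output against the input, which is one common way such a GEC system is applied to a diagnosis task.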
Anthology ID: 2020.nlptea-1.11
Volume: Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications
Month: December
Year: 2020
Address: Suzhou, China
Editors: Erhong YANG, Endong XUN, Baolin ZHANG, Gaoqi RAO
Venue: NLP-TEA
Publisher: Association for Computational Linguistics
Pages: 87–90
URL: https://aclanthology.org/2020.nlptea-1.11
Cite (ACL): Hongfei Wang and Mamoru Komachi. 2020. TMU-NLP System Using BERT-based Pre-trained Model to the NLP-TEA CGED Shared Task 2020. In Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications, pages 87–90, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): TMU-NLP System Using BERT-based Pre-trained Model to the NLP-TEA CGED Shared Task 2020 (Wang & Komachi, NLP-TEA 2020)
PDF: https://preview.aclanthology.org/naacl-24-ws-corrections/2020.nlptea-1.11.pdf