2025
Rethinking the Roles of Large Language Models in Chinese Grammatical Error Correction
Yinghui Li | Shang Qin | Jingheng Ye | Haojing Huang | Yangning Li | Shu-Yu Guo | Libo Qin | Xuming Hu | Wenhao Jiang | Hai-Tao Zheng | Philip S. Yu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)
Recently, Large Language Models (LLMs) have been widely studied for their roles in various downstream NLP tasks. As a fundamental task in the NLP field, Chinese Grammatical Error Correction (CGEC) aims to correct all potential grammatical errors in the input sentences. Previous studies have shown that LLMs' performance as correctors on CGEC remains unsatisfactory due to the challenging nature of the task. To help the CGEC field better adapt to the era of LLMs, we rethink the roles of LLMs in the CGEC task so that they can be better utilized and explored. Considering the rich grammatical knowledge stored in LLMs and their powerful semantic understanding capabilities, we utilize LLMs as explainers that provide explanations to small CGEC models during error correction, aiming to enhance their performance. We also use LLMs as evaluators to enable more reasonable CGEC evaluation, alleviating the problems caused by the subjectivity of the CGEC task. In particular, our work is also an active exploration of how LLMs and small models can better collaborate on downstream tasks. Extensive experiments and detailed analyses on widely used datasets verify the effectiveness of our intuition and the proposed methods.
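The abstract assigns the LLM two roles: an explainer that supplies error explanations to a small CGEC corrector, and an evaluator that judges correction quality. The sketch below is not the authors' implementation; it is a minimal illustration of that prompting pattern, where `call_llm` is a hypothetical stand-in for any chat-completion client and `small_corrector` stands in for a fine-tuned small CGEC model.

```python
# Illustrative sketch only: `call_llm` and `small_corrector` are hypothetical
# placeholders, not the paper's actual components or APIs.

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion backend (API or local model)."""
    raise NotImplementedError

def explain_errors(sentence: str) -> str:
    """LLM as explainer: describe the grammatical errors in a Chinese sentence."""
    prompt = (
        "List the grammatical errors in the following Chinese sentence and briefly "
        "explain the type and cause of each error. Do not rewrite the sentence.\n"
        f"Sentence: {sentence}"
    )
    return call_llm(prompt)

def correct_with_explanation(sentence: str, small_corrector) -> str:
    """Pass the LLM's explanation to a small CGEC model as auxiliary input."""
    explanation = explain_errors(sentence)
    # One simple way to inject the explanation: concatenate it to the input sequence.
    return small_corrector(f"{sentence} [SEP] {explanation}")

def evaluate_correction(source: str, hypothesis: str, reference: str) -> str:
    """LLM as evaluator: judge whether a system correction is acceptable."""
    prompt = (
        "Given the source sentence, a system correction, and a reference correction, "
        "judge whether the system correction is grammatical and faithful to the "
        "original meaning. Give a 1-5 score and a one-sentence justification.\n"
        f"Source: {source}\nSystem: {hypothesis}\nReference: {reference}"
    )
    return call_llm(prompt)
```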