Assessing Essay Fluency with Large Language Models

Haihong Wu, Chang Ao, Shiwen Ni


Abstract
With the development of education and the widespread use of the internet, the scale of essay evaluation has increased, making the cost and efficiency of manual grading a significant challenge. To address this, the Twenty-third China National Conference on Computational Linguistics (CCL 2024) established an evaluation contest for essay fluency. The competition has three tracks corresponding to three sub-tasks. This paper presents a detailed analysis of the different tasks, employing the BERT model as well as the recent, widely used large language model Qwen to address these sub-tasks. As a result, our overall scores for the three tasks reached 37.26, 42.48, and 47.64.
Anthology ID:
2024.ccl-3.29
Volume:
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
Month:
July
Year:
2024
Address:
Taiyuan, China
Editors:
Lin Hongfei, Tan Hongye, Li Bin
Venue:
CCL
Publisher:
Chinese Information Processing Society of China
Note:
Pages:
262–268
Language:
English
URL:
https://preview.aclanthology.org/author-degibert/2024.ccl-3.29/
Cite (ACL):
Haihong Wu, Chang Ao, and Shiwen Ni. 2024. Assessing Essay Fluency with Large Language Models. In Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations), pages 262–268, Taiyuan, China. Chinese Information Processing Society of China.
Cite (Informal):
Assessing Essay Fluency with Large Language Models (Wu et al., CCL 2024)
PDF:
https://preview.aclanthology.org/author-degibert/2024.ccl-3.29.pdf