Autoregressive Score Generation for Multi-trait Essay Scoring

Heejin Do, Yunsu Kim, Gary Lee


Abstract
Recently, encoder-only pre-trained models such as BERT have been successfully applied in automated essay scoring (AES) to predict a single overall score. However, these models remain unexplored in multi-trait AES, possibly due to the inefficiency of replicating a BERT-based model for each trait. Breaking away from the existing sole use of an *encoder*, we propose autoregressive prediction of multi-trait scores (ArTS), incorporating a *decoding* process by leveraging the pre-trained T5. Unlike prior regression or classification methods, we redefine AES as a score-generation task, allowing a single model to predict multiple trait scores. During decoding, each subsequent trait prediction can benefit from conditioning on the preceding trait scores. Experimental results demonstrate the efficacy of ArTS, showing average improvements of over 5% across both prompts and traits.
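The key reformulation in the abstract is treating multiple trait scores as a single text sequence to be generated autoregressively, so later traits condition on earlier ones. A minimal sketch of that serialization idea is shown below; the trait names, their ordering, and the exact output format are illustrative assumptions, not the paper's specification.

```python
# Sketch of framing multi-trait AES as sequence generation (ArTS-style).
# A seq2seq model such as T5 would be trained to emit the serialized
# target; here we only show the serialization and parsing logic.

TRAITS = ["overall", "content", "organization", "conventions"]  # illustrative

def serialize_scores(scores: dict) -> str:
    """Flatten a trait->score mapping into one target sequence, so a
    decoder generates scores left to right, conditioning each trait
    on the traits already emitted."""
    return " ".join(f"{t} {scores[t]}" for t in TRAITS)

def parse_scores(generated: str) -> dict:
    """Recover trait scores from a generated sequence."""
    tokens = generated.split()
    return {tokens[i]: int(tokens[i + 1]) for i in range(0, len(tokens), 2)}

target = serialize_scores(
    {"overall": 8, "content": 4, "organization": 3, "conventions": 4}
)
# target == "overall 8 content 4 organization 3 conventions 4"
parsed = parse_scores(target)
```

In this framing, a single model handles all traits: the decoder's chain-rule factorization supplies the cross-trait conditioning that separate per-trait regressors would lack.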
Anthology ID:
2024.findings-eacl.115
Volume:
Findings of the Association for Computational Linguistics: EACL 2024
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1659–1666
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-eacl.115/
Cite (ACL):
Heejin Do, Yunsu Kim, and Gary Lee. 2024. Autoregressive Score Generation for Multi-trait Essay Scoring. In Findings of the Association for Computational Linguistics: EACL 2024, pages 1659–1666, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Autoregressive Score Generation for Multi-trait Essay Scoring (Do et al., Findings 2024)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-eacl.115.pdf
Video:
https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-eacl.115.mp4