Abstract
While steady progress has been made on the task of automated essay scoring (AES) over the past decade, much of the recent work in this area has focused on developing models that beat existing models on a standard evaluation dataset. While improving performance numbers remains an important goal in the short term, such a focus is not necessarily beneficial for the long-term development of the field. We reflect on the state of the art in AES research, discussing issues that we believe can encourage researchers to think beyond improving performance numbers, with the ultimate goal of triggering discussion among AES researchers on how the field should move forward.
- Anthology ID:
- 2024.emnlp-main.991
- Volume:
- Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
- Month:
- November
- Year:
- 2024
- Address:
- Miami, Florida, USA
- Editors:
- Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 17876–17888
- URL:
- https://aclanthology.org/2024.emnlp-main.991
- DOI:
- 10.18653/v1/2024.emnlp-main.991
- Cite (ACL):
- Shengjie Li and Vincent Ng. 2024. Automated Essay Scoring: A Reflection on the State of the Art. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 17876–17888, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal):
- Automated Essay Scoring: A Reflection on the State of the Art (Li & Ng, EMNLP 2024)
- PDF:
- https://preview.aclanthology.org/dois-2013-emnlp/2024.emnlp-main.991.pdf