Abstract
Studies of writing revisions rarely focus on revision quality. To address this issue, we introduce a corpus of between-draft revisions of student argumentative essays, annotated as to whether each revision improves essay quality. We demonstrate a potential usage of our annotations by developing a machine learning model to predict revision improvement. With the goal of expanding training data, we also extract revisions from a dataset edited by expert proofreaders. Our results indicate that blending expert and non-expert revisions increases model performance, with expert data particularly important for predicting low-quality revisions.
- Anthology ID: W18-0528
- Volume: Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications
- Month: June
- Year: 2018
- Address: New Orleans, Louisiana
- Editors: Joel Tetreault, Jill Burstein, Ekaterina Kochmar, Claudia Leacock, Helen Yannakoudakis
- Venue: BEA
- SIG: SIGEDU
- Publisher: Association for Computational Linguistics
- Pages: 240–246
- URL: https://aclanthology.org/W18-0528
- DOI: 10.18653/v1/W18-0528
- Cite (ACL): Tazin Afrin and Diane Litman. 2018. Annotation and Classification of Sentence-level Revision Improvement. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 240–246, New Orleans, Louisiana. Association for Computational Linguistics.
- Cite (Informal): Annotation and Classification of Sentence-level Revision Improvement (Afrin & Litman, BEA 2018)
- PDF: https://preview.aclanthology.org/emnlp22-frontmatter/W18-0528.pdf