Identifying and analyzing ‘noisy’ spelling errors in a second language corpus

Alan Juffs, Ben Naismith


Abstract
This paper addresses the problem of identifying and analyzing ‘noisy’ spelling errors in texts written by second language (L2) learners in a written corpus. Using Python, spelling errors were identified in 5,774 texts of at least 66 words (total = 1,814,209 words), selected from a corpus of 4.2 million words (Authors-1). The statistical analysis used hurdle() models in R, which are appropriate for non-normal count data with many zeros.
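As a rough illustration of the pipeline the abstract describes, the sketch below filters texts by the 66-word threshold and counts tokens missing from a reference word list as candidate spelling errors. This is a hypothetical minimal sketch, not the authors' actual implementation: the tokenizer, the `known_words` set, and the function names are stand-ins for whatever resources the paper used.

```python
import re

MIN_WORDS = 66  # length threshold stated in the abstract


def tokenize(text):
    """Lowercase alphabetic tokens; a real pipeline would handle
    punctuation, numbers, and proper nouns more carefully."""
    return re.findall(r"[a-z]+", text.lower())


def count_candidate_errors(text, known_words):
    """Return (word_count, error_count) for one learner text,
    treating any token absent from known_words as a candidate error."""
    tokens = tokenize(text)
    errors = sum(1 for t in tokens if t not in known_words)
    return len(tokens), errors


def filter_and_score(texts, known_words, min_words=MIN_WORDS):
    """Keep only texts meeting the length threshold and score each,
    yielding per-text counts suitable for count-data modeling."""
    results = []
    for text in texts:
        n, errs = count_candidate_errors(text, known_words)
        if n >= min_words:
            results.append({"words": n, "errors": errs})
    return results
```

The per-text error counts produced this way (many texts with zero errors) are the kind of data that would then be passed to hurdle() models in R, as the abstract notes.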
Anthology ID:
2025.wnut-1.4
Volume:
Proceedings of the Tenth Workshop on Noisy and User-generated Text
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico, USA
Editors:
JinYeong Bak, Rob van der Goot, Hyeju Jang, Weerayut Buaphet, Alan Ramponi, Wei Xu, Alan Ritter
Venues:
WNUT | WS
Publisher:
Association for Computational Linguistics
Pages:
26–37
URL:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.wnut-1.4/
Cite (ACL):
Alan Juffs and Ben Naismith. 2025. Identifying and analyzing ‘noisy’ spelling errors in a second language corpus. In Proceedings of the Tenth Workshop on Noisy and User-generated Text, pages 26–37, Albuquerque, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
Identifying and analyzing ‘noisy’ spelling errors in a second language corpus (Juffs & Naismith, WNUT 2025)
PDF:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.wnut-1.4.pdf