Mitigating Learnerese Effects for CEFR Classification

Rricha Jalota, Peter Bourgonje, Jan Van Sas, Huiyan Huang


Abstract
The role of an author's first language (L1) in second language acquisition (SLA) can be challenging for automated CEFR classification: texts from different L1 groups may be too heterogeneous to combine as training data. We experiment with recent debiasing approaches, attempting to strip L1 features from textual representations. This yields a more homogeneous group when aggregating CEFR-annotated texts from different L1 groups, leading to better classification performance. Using iterative null-space projection, we marginally improve classification performance for a linear classifier by 1 point. An MLP (i.e., non-linear) classifier remains unaffected by this procedure. We discuss possible directions for future work aimed at increasing this performance gain.
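The debiasing technique named in the abstract, iterative null-space projection (INLP), repeatedly trains a linear probe to predict the protected attribute (here, the author's L1) from the representations, then projects the representations onto the null space of the probe's weights so that direction is no longer linearly decodable. The following is a minimal sketch on synthetic data, not the authors' implementation; the function names, hyperparameters, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rowspace_projection(W):
    """Orthogonal projection matrix onto the row space of W."""
    _, s, vt = np.linalg.svd(W, full_matrices=False)
    basis = vt[s > 1e-10]          # right singular vectors spanning rowspace(W)
    if basis.size == 0:
        return np.zeros((W.shape[1], W.shape[1]))
    return basis.T @ basis

def inlp(X, z, n_iters=5):
    """Iteratively remove the linearly decodable attribute z (e.g. L1 label)
    from representations X by projecting onto probe null spaces."""
    d = X.shape[1]
    P = np.eye(d)                  # accumulated guarding projection
    Xp = X.copy()
    for _ in range(n_iters):
        probe = LogisticRegression(max_iter=1000).fit(Xp, z)
        # Remove the direction(s) the attribute probe relies on.
        P = (np.eye(d) - rowspace_projection(probe.coef_)) @ P
        Xp = X @ P.T
    return Xp, P
```

On synthetic data where one dimension encodes the protected attribute, a probe retrained after INLP drops toward chance accuracy, which is the sense in which the projected representations are "devoid of" L1 features; note that, as the abstract reports, a non-linear classifier may still recover the attribute.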
Anthology ID:
2022.bea-1.3
Volume:
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
Month:
July
Year:
2022
Address:
Seattle, Washington
Editors:
Ekaterina Kochmar, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Nitin Madnani, Anaïs Tack, Victoria Yaneva, Zheng Yuan, Torsten Zesch
Venue:
BEA
SIG:
SIGEDU
Publisher:
Association for Computational Linguistics
Pages:
14–21
URL:
https://aclanthology.org/2022.bea-1.3
DOI:
10.18653/v1/2022.bea-1.3
Cite (ACL):
Rricha Jalota, Peter Bourgonje, Jan Van Sas, and Huiyan Huang. 2022. Mitigating Learnerese Effects for CEFR Classification. In Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), pages 14–21, Seattle, Washington. Association for Computational Linguistics.
Cite (Informal):
Mitigating Learnerese Effects for CEFR Classification (Jalota et al., BEA 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2022.bea-1.3.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-4/2022.bea-1.3.mp4