Modeling Language Learning in Corrective Feedback Interactions

Juan Luis Castro-Garcia, Parisa Kordjamshidi


Abstract
To study computational models of language acquisition, we propose an interactive computational framework that utilizes a miniature language acquisition dataset in a controlled environment. In this framework, a neural learner model interacts with a teacher model that provides corrective feedback. We investigate various corrective feedback strategies, specifically focusing on reformulations and their effect on the learner model during these interactions. We design experimental settings to evaluate the learner's production of syntactically and semantically correct linguistic utterances and its perception of concepts and word-meaning associations. These results offer insights into the effectiveness of different feedback strategies for language acquisition using artificial neural networks. The outcome of this research is a framework and accompanying dataset for the systematic evaluation of various aspects of language acquisition in a controlled environment.
Anthology ID:
2025.starsem-1.21
Volume:
Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Lea Frermann, Mark Stevenson
Venue:
*SEM
Publisher:
Association for Computational Linguistics
Pages:
267–279
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.21/
Cite (ACL):
Juan Luis Castro-Garcia and Parisa Kordjamshidi. 2025. Modeling Language Learning in Corrective Feedback Interactions. In Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025), pages 267–279, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Modeling Language Learning in Corrective Feedback Interactions (Castro-Garcia & Kordjamshidi, *SEM 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.21.pdf