Reanalyzing L2 Preposition Learning with Bayesian Mixed Effects and a Pretrained Language Model

Jakob Prange, Man Ho Ivy Wong


Abstract
We use both Bayesian and neural models to dissect a data set of Chinese learners’ pre- and post-interventional responses to two tests measuring their understanding of English prepositions. The results mostly replicate previous findings from frequentist analyses and newly reveal crucial interactions between student ability, task type, and stimulus sentence. Given the sparsity of the data as well as high diversity among learners, the Bayesian method proves most useful; but we also see potential in using language model probabilities as predictors of grammaticality and learnability.
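
The page itself contains no code, but the abstract names two concrete techniques. The two short Python sketches below illustrate them under stated assumptions; they are not the authors' implementation, and the file name, column names, and choice of model are hypothetical placeholders.

First, a minimal Bayesian mixed-effects analysis of learner responses, using the bambi library (built on PyMC). Binary response accuracy is modeled with fixed effects for test phase and task type plus their interaction, and random intercepts for students and stimulus items, mirroring the student/task/stimulus interactions the abstract describes.

    # Sketch only: "responses.csv" and all column names are hypothetical.
    import bambi as bmb
    import pandas as pd

    data = pd.read_csv("responses.csv")  # one row per learner response

    # Logistic mixed-effects model: phase (pre/post) crossed with task type,
    # with random intercepts grouping responses by student and by item.
    model = bmb.Model(
        "accuracy ~ phase * task + (1|student) + (1|item)",
        data,
        family="bernoulli",
    )
    idata = model.fit(draws=1000)  # posterior samples via MCMC

Second, one way to use a pretrained language model's probability of a stimulus sentence as a predictor of grammaticality or learnability. Scoring by mean per-token log-probability under GPT-2 is an assumption for illustration; the paper may use a different model or scoring scheme.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def sentence_logprob(sentence: str) -> float:
        """Mean per-token log-probability of `sentence` under GPT-2."""
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            # With labels=input_ids, the model returns the mean
            # cross-entropy (negative log-likelihood) over tokens.
            loss = model(**inputs, labels=inputs["input_ids"]).loss
        return -loss.item()

    # Higher scores = more probable under the LM; a grammatical preposition
    # use should typically outscore an ungrammatical alternative.
    print(sentence_logprob("The book is on the table."))
    print(sentence_logprob("The book is at the table."))
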
Anthology ID: 2023.acl-long.712
Volume: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 12722–12736
URL: https://aclanthology.org/2023.acl-long.712
DOI: 10.18653/v1/2023.acl-long.712
Cite (ACL): Jakob Prange and Man Ho Ivy Wong. 2023. Reanalyzing L2 Preposition Learning with Bayesian Mixed Effects and a Pretrained Language Model. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12722–12736, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Reanalyzing L2 Preposition Learning with Bayesian Mixed Effects and a Pretrained Language Model (Prange & Wong, ACL 2023)
PDF: https://preview.aclanthology.org/emnlp22-frontmatter/2023.acl-long.712.pdf
Video: https://preview.aclanthology.org/emnlp22-frontmatter/2023.acl-long.712.mp4