Are BERTs Sensitive to Native Interference in L2 Production?

Zixin Tang, Prasenjit Mitra, David Reitter


Abstract
Using essays from the International Corpus Network of Asian Learners of English (ICNALE) and the TOEFL11 corpus, we fine-tuned BERT-based neural language models to predict English learners’ native languages. Results showed that neural models can learn to represent and detect such native-language influence, but that multilingually trained models have no advantage in doing so.
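The setup described in the abstract amounts to fine-tuning a pretrained BERT encoder as a text classifier whose labels are the writers’ native languages. Below is a minimal sketch of that general recipe using the Hugging Face transformers library; it is not the authors’ code, and the essay snippets, label set, and hyperparameters are hypothetical placeholders (the paper uses essays from ICNALE and TOEFL11).

# Minimal sketch (not the authors' implementation): fine-tune BERT to predict
# a learner's native language (L1) from an English essay.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

L1_LABELS = ["JPN", "KOR", "ZHO", "THA"]            # hypothetical subset of L1 classes
texts = [                                            # placeholder learner essays
    "I am agree with this statement because ...",
    "In my country, many people think that ...",
]
labels = torch.tensor([0, 1])                        # indices into L1_LABELS

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(L1_LABELS))

batch = tokenizer(texts, padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                                   # a few illustrative epochs
    outputs = model(**batch, labels=labels)          # cross-entropy loss over L1 classes
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    pred = model(**batch).logits.argmax(dim=-1)
print([L1_LABELS[i] for i in pred])                  # predicted native languages

Swapping "bert-base-uncased" for a multilingual checkpoint such as "bert-base-multilingual-cased" changes only the model name in this sketch, which is the kind of comparison the abstract reports on.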
Anthology ID:
2021.insights-1.6
Volume:
Proceedings of the Second Workshop on Insights from Negative Results in NLP
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
João Sedoc, Anna Rogers, Anna Rumshisky, Shabnam Tafreshi
Venue:
insights
Publisher:
Association for Computational Linguistics
Pages:
36–41
URL:
https://aclanthology.org/2021.insights-1.6
DOI:
10.18653/v1/2021.insights-1.6
Cite (ACL):
Zixin Tang, Prasenjit Mitra, and David Reitter. 2021. Are BERTs Sensitive to Native Interference in L2 Production?. In Proceedings of the Second Workshop on Insights from Negative Results in NLP, pages 36–41, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Are BERTs Sensitive to Native Interference in L2 Production? (Tang et al., insights 2021)
PDF:
https://preview.aclanthology.org/nschneid-patch-1/2021.insights-1.6.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-1/2021.insights-1.6.mp4