Abstract
Models such as mBERT and XLM-R have shown success on code-mixed NLP tasks even though they were not exposed to such text during pretraining. Code-mixed NLP models have relied on synthetically generated data, alongside naturally occurring data, to improve their performance. Fine-tuning mBERT on such data improves its code-mixed performance, but the relative benefits of the different types of code-mixed data are not clear. In this paper, we study the impact of fine-tuning with different types of code-mixed data and outline the changes that occur to the model during such fine-tuning. Our findings suggest that fine-tuning on naturally occurring code-mixed data yields the best performance improvement, and that fine-tuning with any type of code-mixed text increases the responsivity of the model's attention heads to code-mixed inputs.
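The abstract centres on fine-tuning mBERT with code-mixed text. As a hedged illustration only (none of the specifics below come from the paper), the following sketch fine-tunes mBERT on a hypothetical code-mixed sentence-classification dataset with the Hugging Face `transformers` Trainer; the file names, column names, label count, and hyperparameters are all assumptions made for the example.

```python
# Minimal sketch: fine-tune mBERT on code-mixed classification data.
# Assumptions: CSV files with "text" and "label" columns; 3 labels;
# hyperparameters are illustrative, not the paper's.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "bert-base-multilingual-cased"  # mBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Hypothetical files of code-mixed text, e.g. Hindi-English sentences with labels.
dataset = load_dataset("csv", data_files={"train": "codemixed_train.csv",
                                          "validation": "codemixed_dev.csv"})

def tokenize(batch):
    # Tokenize the code-mixed sentences into fixed-length inputs for mBERT.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="mbert-codemixed",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()
```

To probe attention-head responsivity in the spirit of the paper, the fine-tuned model could be reloaded with `output_attentions=True` and its attention maps compared on code-mixed versus monolingual inputs; this is only a sketch of the general approach, not the paper's exact analysis procedure.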
- Anthology ID: 2021.adaptnlp-1.12
- Volume: Proceedings of the Second Workshop on Domain Adaptation for NLP
- Month: April
- Year: 2021
- Address: Kyiv, Ukraine
- Editors: Eyal Ben-David, Shay Cohen, Ryan McDonald, Barbara Plank, Roi Reichart, Guy Rotman, Yftah Ziser
- Venue: AdaptNLP
- Publisher: Association for Computational Linguistics
- Pages: 111–121
- URL: https://aclanthology.org/2021.adaptnlp-1.12
- Cite (ACL): Sebastin Santy, Anirudh Srinivasan, and Monojit Choudhury. 2021. BERTologiCoMix: How does Code-Mixing interact with Multilingual BERT?. In Proceedings of the Second Workshop on Domain Adaptation for NLP, pages 111–121, Kyiv, Ukraine. Association for Computational Linguistics.
- Cite (Informal): BERTologiCoMix: How does Code-Mixing interact with Multilingual BERT? (Santy et al., AdaptNLP 2021)
- PDF: https://preview.aclanthology.org/nschneid-patch-2/2021.adaptnlp-1.12.pdf