Exploring Cross-Lingual Knowledge Transfer via Transliteration-Based MLM Fine-Tuning for Critically Low-resource Chakma Language
Adity Khisa | Nusrat Jahan Lia | Tasnim Mahfuz Nafis | Zarif Masud | Tanzir Pial | Shebuti Rayana | Ahmedul Kabir
Proceedings of the Second Workshop on Bangla Language Processing (BLP-2025)
As an Indo-Aryan language with limited available data, Chakma remains largely underrepresented in language models. In this work, we introduce a novel corpus of contextually coherent Bangla-transliterated Chakma, curated from Chakma literature and validated by native speakers. Using this dataset, we fine-tune six encoder-based transformer models, including multilingual (mBERT, XLM-RoBERTa, DistilBERT), regional (BanglaBERT, IndicBERT), and monolingual English (DeBERTaV3) variants, on the masked language modeling (MLM) task. Our experiments show that fine-tuned multilingual models outperform their pre-trained counterparts when adapted to Bangla-transliterated Chakma, achieving up to 73.54% token accuracy and a perplexity as low as 2.90. Our analysis further highlights the impact of data quality on model performance and exposes the limitations of OCR pipelines for morphologically rich Indic scripts. Our research demonstrates that Bangla-transliterated Chakma can be highly effective for transfer learning to the Chakma language, and we release our dataset to encourage further research on multilingual language modeling for low-resource languages.
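For readers who want a sense of the general setup, the sketch below shows how MLM fine-tuning of one such encoder on a transliterated corpus could look with Hugging Face Transformers. The corpus path, model choice, and hyperparameters are illustrative assumptions, not the authors' exact configuration; perplexity is recovered as the exponential of the mean masked-token cross-entropy loss.

```python
# Minimal sketch of MLM fine-tuning on a Bangla-transliterated corpus,
# assuming Hugging Face Transformers/Datasets. File path, model name, and
# hyperparameters are placeholders, not the paper's reported setup.
import math

from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "xlm-roberta-base"          # one of the multilingual encoders evaluated
CORPUS_FILE = "chakma_bn_translit.txt"   # hypothetical path: one sentence per line

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

# Load the plain-text corpus and hold out a small split for perplexity.
raw = load_dataset("text", data_files={"train": CORPUS_FILE})["train"]
splits = raw.train_test_split(test_size=0.1, seed=42)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = splits.map(tokenize, batched=True, remove_columns=["text"])

# Standard dynamic masking (15% of tokens) for the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="mlm-chakma",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=collator,
)

trainer.train()

# Perplexity = exp(average masked-token cross-entropy on the held-out split).
eval_loss = trainer.evaluate()["eval_loss"]
print(f"perplexity = {math.exp(eval_loss):.2f}")
```

The same loop can be repeated over the other checkpoints (e.g. mBERT, DistilBERT, BanglaBERT, IndicBERT, DeBERTaV3) to compare fine-tuned against pre-trained perplexity and masked-token accuracy.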