Sumit Yadav
2026
MaiBERT: A Pre-training Corpus and Language Model for Low-Resourced Maithili Language
Sumit Yadav | Raju Kumar Yadav | Utsav Maskey | Gautam Siddharth Kashyap | Ganesh Gautam | Usman Naseem
Proceedings of the Second Workshop on Language Models for Low-Resource Languages (LoResLM 2026)
Natural Language Understanding (NLU) for low-resource languages remains a major challenge in NLP due to the scarcity of high-quality data and language-specific models. Maithili, despite being spoken by millions, lacks adequate computational resources, limiting its inclusion in digital and AI-driven applications. To address this gap, we introduce maiBERT, a BERT-based language model pre-trained specifically for Maithili with the Masked Language Modeling (MLM) objective. Our model is trained on a newly constructed Maithili corpus and evaluated on a news classification task. In our experiments, maiBERT achieved an accuracy of 87.02%, outperforming existing regional models such as NepBERTa and HindiBERT, with a 0.13% overall accuracy gain and 5–7% improvements across individual classes. We have open-sourced maiBERT on Hugging Face, enabling fine-tuning for downstream tasks such as sentiment analysis and Named Entity Recognition (NER).
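Because maiBERT is released on Hugging Face and pre-trained with MLM, it can be exercised directly with a fill-mask pipeline before any fine-tuning. The sketch below is illustrative only: the repository identifier `your-org/maiBERT` is a placeholder assumption (the abstract does not name the exact checkpoint), and the Maithili example sentence is our own, not from the paper.

```python
from transformers import pipeline

# Hypothetical model identifier -- substitute the checkpoint actually
# released by the authors on Hugging Face.
MODEL_ID = "your-org/maiBERT"

# maiBERT is pre-trained with Masked Language Modeling, so a fill-mask
# pipeline tests it directly: the model predicts the token hidden behind
# the [MASK] placeholder (standard for BERT-style tokenizers).
fill_mask = pipeline("fill-mask", model=MODEL_ID)

# Illustrative Maithili sentence in Devanagari:
# "Maithili is my [MASK] language."
for prediction in fill_mask("मैथिली हमर [MASK] भाषा अछि।"):
    print(prediction["token_str"], round(prediction["score"], 3))
```

For the downstream tasks the abstract mentions (e.g., news classification or sentiment analysis), the same checkpoint would instead be loaded with `AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=...)` and fine-tuned on labeled data.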