Quazi Sarwar Muhtaseem
2025
TituLLMs: A Family of Bangla LLMs with Comprehensive Benchmarking
Shahriar Kabir Nahin | Rabindra Nath Nandi | Sagor Sarker | Quazi Sarwar Muhtaseem | Md Kowsher | Apu Chandraw Shill | Md Ibrahim | Mehadi Hasan Menon | Tareq Al Muntasir | Firoj Alam
Findings of the Association for Computational Linguistics: ACL 2025
In this paper, we present TituLLMs, the first large pretrained Bangla LLMs, available in 1B and 3B parameter sizes. Due to computational constraints during both training and inference, we focused on smaller models. To train TituLLMs, we collected a pretraining dataset of approximately 37 billion tokens. We extended the Llama-3.2 tokenizer to incorporate language- and culture-specific knowledge, which also enables faster training and inference. Benchmarking datasets for evaluating Bangla LLMs were lacking; to address this gap, we developed five benchmarking datasets. We benchmarked various LLMs, including TituLLMs, and demonstrated that TituLLMs outperforms its initial multilingual versions. However, this is not always the case, highlighting the complexities of language adaptation. Our work lays the groundwork for adapting existing multilingual open models to other low-resource languages. To facilitate broader adoption and further research, we have made the TituLLMs models and benchmarking datasets publicly available.
2023
Pseudo-Labeling for Domain-Agnostic Bangla Automatic Speech Recognition
Rabindra Nath Nandi | Mehadi Menon | Tareq Muntasir | Sagor Sarker | Quazi Sarwar Muhtaseem | Md. Tariqul Islam | Shammur Chowdhury | Firoj Alam
Proceedings of the First Workshop on Bangla Language Processing (BLP-2023)
One of the major challenges in developing automatic speech recognition (ASR) for low-resource languages is the limited access to labeled data with domain-specific variations. In this study, we propose a pseudo-labeling approach to develop a large-scale domain-agnostic ASR dataset. With the proposed methodology, we developed a 20k+ hour labeled Bangla speech dataset covering diverse topics, speaking styles, dialects, noisy environments, and conversational scenarios. We then exploited the developed corpus to design a conformer-based ASR system. We benchmarked the trained ASR with publicly available datasets and compared it with other available models. To investigate its efficacy, we designed and developed a human-annotated domain-agnostic test set composed of news, telephony, and conversational data, among others. Our results demonstrate the efficacy of the model trained on pseudo-labeled data on the designed test set as well as on publicly available Bangla datasets. The experimental resources are publicly available: https://github.com/hishab-nlp/Pseudo-Labeling-for-Domain-Agnostic-Bangla-ASR