Tareq Al Muntasir
2025
BnTTS: Few-Shot Speaker Adaptation in Low-Resource Setting
Mohammad Jahid Ibna Basher | Md Kowsher | Md Saiful Islam | Rabindra Nath Nandi | Nusrat Jahan Prottasha | Mehadi Hasan Menon | Tareq Al Muntasir | Shammur Absar Chowdhury | Firoj Alam | Niloofar Yousefi | Ozlem Garibay
Findings of the Association for Computational Linguistics: NAACL 2025
This paper introduces BnTTS (Bangla Text-To-Speech), the first framework for speaker-adaptation-based Bangla TTS, designed to bridge the gap in Bangla speech synthesis using minimal training data. Building upon the XTTS architecture, our approach integrates Bangla into a multilingual TTS pipeline, with modifications to account for the phonetic and linguistic characteristics of the language. We pretrain BnTTS on a 3.85k-hour Bangla speech dataset with corresponding text labels and evaluate performance in both zero-shot and few-shot settings on our proposed test dataset. Empirical evaluations in few-shot settings show that BnTTS significantly improves the naturalness, intelligibility, and speaker fidelity of synthesized Bangla speech. Compared to state-of-the-art Bangla TTS systems, BnTTS exhibits superior performance on Subjective Mean Opinion Score (SMOS), Naturalness, and Clarity metrics.
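Because BnTTS builds on XTTS, few-shot speaker adaptation at inference time follows the familiar voice-cloning pattern: condition the model on a short reference clip of the target speaker. Below is a minimal sketch using the Coqui TTS API; the BnTTS checkpoint paths and the `bn` language code are illustrative assumptions, not names confirmed by the paper.

```python
# Minimal few-shot voice-cloning sketch in the style of XTTS (Coqui TTS).
# Assumptions: a local BnTTS checkpoint loadable through TTS.api and a
# "bn" language code; neither is confirmed by the paper.
from TTS.api import TTS

# Load the model from a hypothetical local checkpoint.
tts = TTS(model_path="bntts_checkpoint/", config_path="bntts_checkpoint/config.json")

# Synthesize Bangla text, conditioning on a few seconds of reference audio
# from the target speaker (this is the few-shot adaptation signal).
tts.tts_to_file(
    text="আমার সোনার বাংলা, আমি তোমায় ভালোবাসি।",
    speaker_wav="reference_speaker.wav",
    language="bn",
    file_path="output.wav",
)
```

Note that this covers only inference-time conditioning; few-shot adaptation that updates model weights on a handful of target-speaker utterances requires the training pipeline rather than this API.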
TituLLMs: A Family of Bangla LLMs with Comprehensive Benchmarking
Shahriar Kabir Nahin | Rabindra Nath Nandi | Sagor Sarker | Quazi Sarwar Muhtaseem | Md Kowsher | Apu Chandraw Shill | Md Ibrahim | Mehadi Hasan Menon | Tareq Al Muntasir | Firoj Alam
Findings of the Association for Computational Linguistics: ACL 2025
In this paper, we present TituLLMs, the first large pretrained Bangla LLMs, available in 1B and 3B parameter sizes. Due to computational constraints during both training and inference, we focused on smaller models. To train TituLLMs, we collected a pretraining dataset of approximately 37 billion tokens. We extended the Llama-3.2 tokenizer to incorporate language- and culture-specific knowledge, which also enables faster training and inference. Because Bangla lacked datasets for benchmarking LLMs, we developed five benchmarking datasets to address this gap. We benchmarked various LLMs, including TituLLMs, and demonstrated that TituLLMs outperforms its initial multilingual versions, although not in every case, highlighting the complexities of language adaptation. Our work lays the groundwork for adapting existing multilingual open models to other low-resource languages. To facilitate broader adoption and further research, we have made the TituLLMs models and benchmarking datasets publicly available.
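The tokenizer extension mentioned above is the standard mechanism for adapting a multilingual base model to a new script: add language-specific tokens to the vocabulary, then resize the embedding matrix to match. The sketch below shows the generic Hugging Face pattern; the specific Bangla tokens are illustrative assumptions, not the paper's actual vocabulary or training recipe.

```python
# Generic sketch of extending a Llama-3.2 tokenizer with Bangla tokens,
# using standard Hugging Face APIs. The token list is illustrative; the
# paper's actual added vocabulary and merge procedure may differ.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

# Hypothetical Bangla subword units; in practice these would come from a
# BPE/SentencePiece vocabulary trained on the Bangla pretraining corpus.
new_tokens = ["বাংলা", "ঢাকা", "আমরা", "শিক্ষা"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding (and output) matrix so the new ids have rows; the
# new rows are randomly initialized and learned during pretraining.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; vocab size is now {len(tokenizer)}")
```

Dedicated Bangla tokens mean fewer tokens per character of Bangla text, which is what yields the faster training and inference the abstract mentions: shorter token sequences for the same input.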
Co-authors
- Firoj Alam 2
- Md Kowsher 2
- Mehadi Hasan Menon 2
- Rabindra Nath Nandi 2
- Mohammad Jahid Ibna Basher 1