Michael Leventhal
2026
Dealing with the Hard Facts of Low-Resource African NLP
Michael Leventhal | Yacouba Diarra | Nouhoum Coulibaly | Panga Azazia Kamaté | Aymane Dembélé | Madani Amadou Tall | Emmanuel Elise Kone
Proceedings of the 7th Workshop on African Natural Language Processing (AfricaNLP 2026)
Creating speech datasets, models, and evaluation frameworks for low-resource languages remains challenging given the lack of a broad base of pertinent experience to draw from. This paper reports on the field collection of 612 hours of spontaneous speech in Bambara, a low-resource West African language; the semi-automated annotation of that dataset with transcriptions; the creation of several monolingual ultra-compact and small models using the dataset; and the automatic and human evaluation of their output. We offer practical suggestions for data collection protocols, annotation, and model design, as well as evidence for the importance of performing human evaluation. In addition to the main dataset, multiple evaluation datasets, models, and code are made publicly available.
Kunnafonidilaw ka Cadeau: an ASR dataset of present-day Bambara
Michael Leventhal | Yacouba Diarra | Nouhoum Coulibaly | Panga Azazia Kamaté
Proceedings of the 7th Workshop on African Natural Language Processing (AfricaNLP 2026)
We present Kunkado, a 160-hour Bambara ASR dataset compiled from Malian radio archives to capture present-day spontaneous speech across a wide range of topics. It includes the code-switching, disfluencies, background noise, and overlapping speakers that practical ASR systems encounter in real-world use. We fine-tuned Parakeet-based models on a 33.47-hour human-reviewed subset and applied pragmatic transcript normalization to reduce variability in number formatting, tags, and code-switching annotations. Evaluated on two real-world test sets, fine-tuning with Kunkado reduces WER from 44.47% to 37.12% on one and from 36.07% to 32.33% on the other. In human evaluation, the resulting model also outperforms a comparable system with the same architecture trained on 98 hours of cleaner, less realistic speech. We release the data and models to support robust ASR for predominantly oral languages.
2025
GAIfE: Using GenAI to Improve Literacy in Low-resourced Settings
Allahsera Auguste Tapo | Nouhoum Coulibaly | Seydou Diallo | Sebastien Diarra | Christopher M Homan | Mamadou K. Keita | Michael Leventhal
Findings of the Association for Computational Linguistics: NAACL 2025
Illiteracy is a predictor of many negative social and personal outcomes. Illiteracy rates are particularly high in countries with underresourced languages, where few books exist that are suitable for children to learn to read from. We present GAIfE (Generative AI for Education), a toolchain and workflow developed through empirical methods, that demonstrates how existing tools can be adapted to address low literacy for an underresourced language. We used GAIfE (a play on the Bambara word for “book”) to construct materials for developing children’s reading competence in Bambara, the vehicular language of Mali. Our approach to generating and then editing content skewed by the Global-North-centric bias of available LLMs enabled us to rapidly increase the Bambara content available online tenfold while maintaining high standards of attractiveness (to sustain engagement), accurate representation of Malian culture and the physical and social environment, and language quality. Using our materials, pilot reading programs achieved a 67% reduction in the number of children unable to read Bambara. Our approach demonstrates the power of bias-aware application of generative AI to this problem domain, as well as the potential impact this technology could have on reducing illiteracy and improving learning outcomes through native-language education.
2020
Neural Machine Translation for Extremely Low-Resource African Languages: A Case Study on Bambara
Allahsera Auguste Tapo | Bakary Coulibaly | Sébastien Diarra | Christopher Homan | Julia Kreutzer | Sarah Luger | Arthur Nagashima | Marcos Zampieri | Michael Leventhal
Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages
Low-resource languages present unique challenges to (neural) machine translation. We discuss the case of Bambara, a Mande language for which training data is scarce and requires significant amounts of pre-processing. Beyond the linguistic situation of Bambara itself, the socio-cultural context within which Bambara speakers live poses challenges for automated processing of this language. In this paper, we present the first parallel dataset for machine translation of Bambara into and from English and French, and the first benchmark results on machine translation to and from Bambara. We discuss challenges in working with low-resource languages and propose strategies to cope with data scarcity in low-resource machine translation (MT).