Full Fine-Tuning vs. Parameter-Efficient Adaptation for Low-Resource African ASR: A Controlled Study with Whisper-Small
Sukairaj Hafiz Imam | Muhammad Yahuza Bello | Hadiza Ali Umar | Tadesse Destaw Belay | Idris Abdulmumin | Seid Muhie Yimam | Shamsuddeen Hassan Muhammad
Proceedings of the 7th Workshop on African Natural Language Processing (AfricaNLP 2026)
Automatic speech recognition (ASR) for African low-resource languages (LRLs) is often limited by scarce labelled data and the high cost of adapting large foundation models. This study evaluates whether parameter-efficient fine-tuning (PEFT) can serve as a practical alternative to full fine-tuning (FFT) for adapting Whisper-Small with limited labelled speech and constrained compute. Using a 10-hour subset of NaijaVoices covering Hausa, Yorùbá, and Igbo, we compare FFT with several PEFT strategies under a fixed evaluation protocol. DoRA attains a 22.0% macro-average WER, closely matching the 22.1% achieved by FFT while updating only 4M parameters rather than 240M; this difference remains within run-to-run variation across random seeds. Yorùbá consistently yields the lowest word error rates, whereas Igbo remains the most challenging. These results indicate that PEFT can deliver near-FFT accuracy with substantially lower training and storage requirements for low-resource African ASR.
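For illustration, the sketch below shows how DoRA-style adaptation of Whisper-Small might be set up with the Hugging Face transformers and peft libraries. This is not the authors' code; the rank, scaling factor, and target modules are illustrative assumptions rather than the paper's reported configuration.

```python
# A minimal sketch (assumed setup, not the paper's implementation) of
# DoRA-based PEFT for Whisper-Small using Hugging Face `transformers`
# and `peft`. Hyperparameters below are illustrative assumptions.
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Load the ~240M-parameter base model.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# In `peft`, DoRA is enabled by setting use_dora=True on a LoRA config.
config = LoraConfig(
    r=32,                                 # adapter rank (assumed)
    lora_alpha=64,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    use_dora=True,                        # weight-decomposed LoRA (DoRA)
)

model = get_peft_model(base, config)
# Only the small adapter (on the order of millions of parameters) is
# trainable; the base weights stay frozen.
model.print_trainable_parameters()
```

Because only the adapter weights are updated and stored, a setup like this is what makes the reported storage savings possible: one frozen base checkpoint can be shared across languages, with a small per-language adapter for each.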