Victor Tolulope Olufemi


2025

Beyond Monolingual Limits: Fine-Tuning Monolingual ASR for Yoruba-English Code-Switching
Oreoluwa Boluwatife Babatunde | Victor Tolulope Olufemi | Emmanuel Bolarinwa | Kausar Yetunde Moshood | Chris Chinenye Emezue
Proceedings of the 7th Workshop on Computational Approaches to Linguistic Code-Switching

Code-switching (CS) presents a significant challenge for Automatic Speech Recognition (ASR) systems, particularly in low-resource settings. While multilingual ASR models like OpenAI Whisper Large v3 are designed to handle multiple languages, their high computational demands make them less practical for real-world deployment in resource-constrained environments. In this study, we investigate the effectiveness of fine-tuning both monolingual and multilingual ASR models for Yoruba-English CS speech. Our results show that unadapted monolingual ASR models outperform Whisper Large v3 in a zero-shot setting on CS speech. Fine-tuning significantly reduces word error rate (WER) for both monolingual and multilingual models, with monolingual models achieving over a 20% WER reduction on CS and Yoruba speech while maintaining lower computational costs. However, we observe a trade-off, as fine-tuning leads to some degradation in English recognition, particularly for multilingual models. Our findings highlight that while multilingual models benefit from fine-tuning, monolingual models provide a computationally efficient and competitive alternative for CS-ASR, making them a viable choice for resource-constrained environments.
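
For illustration, a minimal sketch of the kind of fine-tuning step the abstract describes, applied to a code-switched utterance. The checkpoint name, sample rate, dummy audio, and mixed Yoruba-English transcript below are assumptions for the sketch, not the paper's exact data or configuration.

```python
# Minimal sketch: one fine-tuning step of a Whisper checkpoint on a
# hypothetical Yoruba-English code-switched example (Hugging Face
# transformers). Illustrative only; not the paper's exact setup.
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

model_name = "openai/whisper-small"  # assumed base checkpoint
processor = WhisperProcessor.from_pretrained(model_name)
model = WhisperForConditionalGeneration.from_pretrained(model_name)

# Dummy 16 kHz waveform standing in for a real code-switched recording.
waveform = torch.randn(16000 * 5).numpy()
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

# Hypothetical mixed Yoruba-English transcript used as the training target.
transcript = "mo fe lo si the market today"
labels = processor.tokenizer(transcript, return_tensors="pt").input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
outputs = model(input_features=inputs.input_features, labels=labels)
outputs.loss.backward()  # cross-entropy loss over the target transcript
optimizer.step()
print(f"loss: {outputs.loss.item():.3f}")
```

In practice this step would run over a full corpus of code-switched recordings with a data collator and evaluation on held-out CS, Yoruba, and English test sets to track the trade-off the abstract reports.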