Olabanji Shonibare
2024
Multi-Stage Multi-Modal Pre-Training for Automatic Speech Recognition
Yash Jain | David M. Chan | Pranav Dheram | Aparna Khare | Olabanji Shonibare | Venkatesh Ravichandran | Shalini Ghosh
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Recent advances in machine learning have demonstrated that multi-modal pre-training can improve automatic speech recognition (ASR) performance compared to randomly initialized models, even when models are fine-tuned on uni-modal tasks. Existing multi-modal pre-training methods for the ASR task have primarily focused on single-stage pre-training where a single unsupervised task is used for pre-training followed by fine-tuning on the downstream task. In this work, we introduce a novel method combining multi-modal and multi-task unsupervised pre-training with a translation-based supervised mid-training approach. We empirically demonstrate that such a multi-stage approach leads to relative word error rate (WER) improvements of up to 38.45% over baselines on both Librispeech and SUPERB. Additionally, we share several important findings for choosing pre-training methods and datasets.
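The abstract describes a staged training recipe: unsupervised multi-modal, multi-task pre-training, followed by translation-based supervised mid-training, followed by fine-tuning on the downstream ASR task. The sketch below illustrates only the control flow of such a schedule under toy assumptions; the encoder, stage heads, proxy losses, and data are illustrative placeholders, not the paper's actual models or objectives.

```python
# A minimal, self-contained sketch of a three-stage schedule
# (unsupervised pre-training -> supervised mid-training -> ASR fine-tuning).
# Every module, loss, and dataset here is a toy placeholder.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in for the shared speech encoder carried across stages."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)

def run_stage(encoder, head, data, targets, loss_fn, epochs=3):
    """Train encoder + stage-specific head on one objective; keep the encoder."""
    params = list(encoder.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(head(encoder(data)), targets)
        loss.backward()
        opt.step()
    return encoder  # the encoder weights are what the next stage inherits

encoder = ToyEncoder()
x = torch.randn(64, 32)

# Stage 1: unsupervised multi-modal/multi-task pre-training
# (a reconstruction objective stands in for masked-prediction-style tasks).
encoder = run_stage(encoder, nn.Linear(32, 32), x, x, nn.MSELoss())

# Stage 2: translation-based supervised mid-training
# (a toy classification objective stands in for the translation task).
y_mid = torch.randint(0, 10, (64,))
encoder = run_stage(encoder, nn.Linear(32, 10), x, y_mid, nn.CrossEntropyLoss())

# Stage 3: fine-tuning on the downstream ASR task
# (a toy classification objective stands in for CTC/seq2seq training).
y_asr = torch.randint(0, 10, (64,))
encoder = run_stage(encoder, nn.Linear(32, 10), x, y_asr, nn.CrossEntropyLoss())
```

The point of the staging is that the same encoder weights flow from one objective to the next; only the stage-specific heads and losses change between stages.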