Fine-Tuning a Pre-Trained Wav2Vec2 Model for Automatic Speech Recognition: Experiments with De Zahrar Sproche

Andrea Gulli, Francesco Costantini, Diego Sidraschi, Emanuela Li Destri


Abstract
We present the results of an Automatic Speech Recognition system developed to support linguistic documentation efforts. The test case is De Zahrar Sproche, a Southern Bavarian variety spoken in the language island of Sauris/Zahre in Italy. We collected a dataset of 9,000 words and approximately 80 minutes of speech. The goal is to reduce the transcription workload of field linguists. The method is a deep learning approach based on the language-specific fine-tuning of a generic pre-trained representation model, XLS-R. The transcription quality achieved in experiments on the collected dataset is promising. We also test the model’s performance on historical fieldwork recordings, report the results, and evaluate them qualitatively. Finally, we indicate possibilities for improvement in this challenging task.
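The approach described in the abstract, adapting a generic pre-trained speech representation model to a specific language by training a character-level output layer with a CTC objective, can be sketched as follows. This is an illustrative minimal example, not the authors' actual setup: a tiny convolutional encoder stands in for the frozen XLS-R (wav2vec2) backbone, and all sizes, names, and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: freeze a pre-trained speech encoder and fine-tune
# only a linear CTC head for the target language's character inventory.
VOCAB_SIZE = 32   # blank symbol + assumed orthographic character set
HIDDEN = 64       # assumed feature dimension

# Stand-in for the pre-trained representation model (e.g. XLS-R).
encoder = nn.Sequential(
    nn.Conv1d(1, HIDDEN, kernel_size=10, stride=5),
    nn.GELU(),
    nn.Conv1d(HIDDEN, HIDDEN, kernel_size=8, stride=4),
    nn.GELU(),
)
for p in encoder.parameters():
    p.requires_grad = False  # keep the generic representation fixed

# The language-specific layer that is actually tuned.
ctc_head = nn.Linear(HIDDEN, VOCAB_SIZE)
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
optimizer = torch.optim.AdamW(ctc_head.parameters(), lr=3e-4)

# One training step on a dummy batch: one 1-second utterance at 16 kHz
# paired with 12 dummy character labels (real data would be transcribed
# fieldwork audio).
audio = torch.randn(1, 1, 16000)
labels = torch.randint(1, VOCAB_SIZE, (1, 12))

features = encoder(audio)                        # (batch, HIDDEN, frames)
logits = ctc_head(features.transpose(1, 2))      # (batch, frames, vocab)
log_probs = logits.log_softmax(-1).transpose(0, 1)  # (frames, batch, vocab)

input_lengths = torch.full((1,), log_probs.size(0), dtype=torch.long)
target_lengths = torch.tensor([labels.size(1)])
loss = ctc_loss(log_probs, labels, input_lengths, target_lengths)
loss.backward()   # gradients flow only into the CTC head
optimizer.step()
```

In practice the frozen encoder would be a checkpoint such as `facebook/wav2vec2-xls-r-300m` loaded via a toolkit like Hugging Face `transformers`, and later training stages typically unfreeze part of the backbone as well.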
Anthology ID:
2024.lrec-main.645
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
7336–7342
URL:
https://aclanthology.org/2024.lrec-main.645
Cite (ACL):
Andrea Gulli, Francesco Costantini, Diego Sidraschi, and Emanuela Li Destri. 2024. Fine-Tuning a Pre-Trained Wav2Vec2 Model for Automatic Speech Recognition: Experiments with De Zahrar Sproche. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 7336–7342, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Fine-Tuning a Pre-Trained Wav2Vec2 Model for Automatic Speech Recognition: Experiments with De Zahrar Sproche (Gulli et al., LREC-COLING 2024)
PDF:
https://preview.aclanthology.org/add_acl24_videos/2024.lrec-main.645.pdf