Evaluating Self-Supervised Speech Representations for Indigenous American Languages
Chih-Chen Chen, William Chen, Rodolfo Joel Zevallos, John E. Ortega
Abstract
The application of self-supervision to speech representation learning has garnered significant interest in recent years, due to its scalability to large amounts of unlabeled data. However, much progress, both in terms of pre-training and downstream evaluation, has remained concentrated in monolingual models that only consider English. Few models consider other languages, and even fewer consider indigenous ones. In this work, we benchmark the efficacy of large SSL models on six indigenous American languages: Quechua, Guarani, Bribri, Kotiria, Wa’ikhana, and Totonac, on low-resource ASR. Our results show surprisingly strong performance by state-of-the-art SSL models, demonstrating the potential generalizability of large-scale models to real-world data.
- Anthology ID:
- 2024.lrec-main.571
- Volume:
- Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
- Month:
- May
- Year:
- 2024
- Address:
- Torino, Italia
- Editors:
- Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
- Venues:
- LREC | COLING
- Publisher:
- ELRA and ICCL
- Pages:
- 6444–6450
- URL:
- https://aclanthology.org/2024.lrec-main.571
- Cite (ACL):
- Chih-Chen Chen, William Chen, Rodolfo Joel Zevallos, and John E. Ortega. 2024. Evaluating Self-Supervised Speech Representations for Indigenous American Languages. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 6444–6450, Torino, Italia. ELRA and ICCL.
- Cite (Informal):
- Evaluating Self-Supervised Speech Representations for Indigenous American Languages (Chen et al., LREC-COLING 2024)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-4/2024.lrec-main.571.pdf