Shan-Hui Cathy Chu
2020
Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems
Fei He | Shan-Hui Cathy Chu | Oddur Kjartansson | Clara Rivera | Anna Katanova | Alexander Gutkin | Isin Demirsahin | Cibu Johny | Martin Jansche | Supheakmungkol Sarin | Knot Pipatsrisawat
Proceedings of the Twelfth Language Resources and Evaluation Conference
We present free, high-quality multi-speaker speech corpora for Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu, six of the twenty-two official languages of India, spoken by 374 million native speakers. The datasets are primarily intended for text-to-speech (TTS) applications, such as constructing multilingual voices or speaker and language adaptation. Most of the corpora (apart from Marathi, which is a female-only database) consist of at least 2,000 recorded lines from female and male native speakers of the language. We present the methodological details behind corpora acquisition, which can be scaled to acquiring data for other languages of interest. We describe experiments in building a multilingual text-to-speech model constructed by combining our corpora. Our results indicate that using these corpora yields good-quality voices, with Mean Opinion Scores (MOS) > 3.6, for all the languages tested. We believe that these resources, released with an open-source license, and the described methodology will help advance speech applications for the languages described and aid corpora development for other, smaller, languages of India and beyond.
Crowdsourcing Latin American Spanish for Low-Resource Text-to-Speech
Adriana Guevara-Rukoz | Isin Demirsahin | Fei He | Shan-Hui Cathy Chu | Supheakmungkol Sarin | Knot Pipatsrisawat | Alexander Gutkin | Alena Butryna | Oddur Kjartansson
Proceedings of the Twelfth Language Resources and Evaluation Conference
In this paper we present a multidialectal corpus approach for building a text-to-speech voice for a new dialect in a language with existing resources, focusing on various South American dialects of Spanish. We first present public speech datasets for Argentinian, Chilean, Colombian, Peruvian, Puerto Rican and Venezuelan Spanish, constructed via crowdsourcing specifically with text-to-speech applications in mind. We then compare the monodialectal voices built with minimal data to a multidialectal model built by pooling the resources from all dialects. Our results show that the multidialectal model outperforms the monodialectal baseline models. We also experiment with a “zero-resource” dialect scenario, building a multidialectal voice for a dialect while holding out the target dialect’s recordings from the training data.