Abstract
While neural text-to-speech systems perform remarkably well in high-resource scenarios, they cannot be applied to the majority of the over 6,000 spoken languages in the world due to a lack of appropriate training data. In this work, we use embeddings derived from articulatory vectors rather than embeddings derived from phoneme identities to learn phoneme representations that hold across languages. In conjunction with language-agnostic meta-learning, this enables us to fine-tune a high-quality text-to-speech model on just 30 minutes of data in a previously unseen language spoken by a previously unseen speaker.
- Anthology ID:
- 2022.acl-long.472
- Volume:
- Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month:
- May
- Year:
- 2022
- Address:
- Dublin, Ireland
- Editors:
- Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 6858–6868
- URL:
- https://aclanthology.org/2022.acl-long.472
- DOI:
- 10.18653/v1/2022.acl-long.472
- Cite (ACL):
- Florian Lux and Thang Vu. 2022. Language-Agnostic Meta-Learning for Low-Resource Text-to-Speech with Articulatory Features. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6858–6868, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal):
- Language-Agnostic Meta-Learning for Low-Resource Text-to-Speech with Articulatory Features (Lux & Vu, ACL 2022)
- PDF:
- https://preview.aclanthology.org/naacl24-info/2022.acl-long.472.pdf
- Code
- digitalphonetics/ims-toucan
- Data
- CSS10
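The abstract's core idea, replacing phoneme-identity embeddings with articulatory feature vectors so that phoneme representations transfer across languages, can be sketched as follows. The feature inventory, phoneme set, and function names below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
# Hedged sketch: representing phonemes by articulatory features rather than
# identity embeddings. The feature inventory and values here are a toy
# illustration, not the feature set used in the paper.

FEATURES = ["voiced", "bilabial", "alveolar", "nasal", "plosive", "fricative"]

# Toy articulatory descriptions for a handful of IPA phonemes.
PHONEMES = {
    "p": {"bilabial", "plosive"},
    "b": {"voiced", "bilabial", "plosive"},
    "m": {"voiced", "bilabial", "nasal"},
    "t": {"alveolar", "plosive"},
    "d": {"voiced", "alveolar", "plosive"},
    "s": {"alveolar", "fricative"},
    "z": {"voiced", "alveolar", "fricative"},
}

def articulatory_vector(phoneme: str) -> list[int]:
    """Binary feature vector in a space shared across languages: any
    language's phoneme maps into the same dimensions, so knowledge
    learned for one language's sounds carries over to unseen ones."""
    active = PHONEMES[phoneme]
    return [1 if f in active else 0 for f in FEATURES]

# /p/ and /b/ differ only in the voicing dimension; nearby sounds get
# nearby vectors, which is the property such representations exploit.
print(articulatory_vector("p"))  # [0, 1, 0, 0, 1, 0]
print(articulatory_vector("b"))  # [1, 1, 0, 0, 1, 0]
```

Because the feature space is defined by articulation rather than by a language-specific phoneme inventory, a phoneme from a previously unseen language still lands near acoustically similar phonemes seen during training.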