Abstract
The task of grapheme-to-phoneme (G2P) conversion is important for both speech recognition and synthesis. As with other speech and language processing tasks, learning G2P models is challenging when only small amounts of training data are available. We describe a simple approach that exploits model ensembles, based on multilingual Transformers and self-training, to develop a highly effective G2P solution for 15 languages. Our models are developed as part of our participation in the SIGMORPHON 2020 Shared Task 1, focused on G2P. Our best models achieve 14.99 word error rate (WER) and 3.30 phoneme error rate (PER), a sizeable improvement over the shared task's competitive baselines.
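For readers unfamiliar with the two metrics reported above, the following is a minimal sketch of how WER and PER are typically computed for G2P output: WER as the percentage of words whose predicted phoneme sequence differs at all from the reference, and PER as length-normalized edit distance averaged over words. The function names and exact normalization are illustrative assumptions, not the shared task's official scoring script.

```python
def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, start=1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, start=1):
            cur = min(dp[j] + 1,          # deletion
                      dp[j - 1] + 1,      # insertion
                      prev + (pa != pb))  # substitution (or match)
            prev, dp[j] = dp[j], cur
    return dp[-1]


def wer_per(hypotheses, references):
    """WER: % of words whose predicted phoneme sequence is not an exact match.
    PER: edit distance divided by reference length, averaged over words (%)."""
    n = len(references)
    word_errors = sum(h != r for h, r in zip(hypotheses, references))
    per_sum = sum(edit_distance(h, r) / len(r)
                  for h, r in zip(hypotheses, references))
    return 100.0 * word_errors / n, 100.0 * per_sum / n


# Toy usage: two words, the second wrong by a single phoneme.
hyp = [["k", "æ", "t"], ["d", "ɔ", "ɡ"]]
ref = [["k", "æ", "t"], ["d", "ɒ", "ɡ"]]
print(wer_per(hyp, ref))  # -> (50.0, 16.67) approximately
```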
- Anthology ID: 2020.sigmorphon-1.16
- Volume: Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
- Month: July
- Year: 2020
- Address: Online
- Venue: SIGMORPHON
- SIG: SIGMORPHON
- Publisher: Association for Computational Linguistics
- Pages: 146–152
- URL: https://aclanthology.org/2020.sigmorphon-1.16
- DOI: 10.18653/v1/2020.sigmorphon-1.16
- Cite (ACL): Kaili Vesik, Muhammad Abdul-Mageed, and Miikka Silfverberg. 2020. One Model to Pronounce Them All: Multilingual Grapheme-to-Phoneme Conversion With a Transformer Ensemble. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 146–152, Online. Association for Computational Linguistics.
- Cite (Informal): One Model to Pronounce Them All: Multilingual Grapheme-to-Phoneme Conversion With a Transformer Ensemble (Vesik et al., SIGMORPHON 2020)
- PDF: https://preview.aclanthology.org/starsem-semeval-split/2020.sigmorphon-1.16.pdf