2023
The SIGMORPHON 2022 Shared Task on Cross-lingual and Low-Resource Grapheme-to-Phoneme Conversion
Arya D. McCarthy | Jackson L. Lee | Alexandra DeLucia | Travis Bartley | Milind Agarwal | Lucas F.E. Ashby | Luca Del Signore | Cameron Gibson | Reuben Raff | Winston Wu
Proceedings of the 20th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
Grapheme-to-phoneme conversion is an important component in many speech technologies, but until recently there were no multilingual benchmarks for this task. The third iteration of the SIGMORPHON shared task on multilingual grapheme-to-phoneme conversion features many improvements over the previous year’s task (Ashby et al., 2021), including additional languages, three subtasks varying the amount of available resources, extensive quality assurance procedures, and automated error analyses. Three teams submitted a total of fifteen systems, at best achieving relative reductions of word error rate of 14% in the cross-lingual subtask and 14% in the very-low-resource subtask. The generally consistent result is that cross-lingual transfer substantially helps grapheme-to-phoneme modeling, but not to the same degree as in-language examples.
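The word error rate reductions reported here (and in the 2021 entry below) can be read as follows. The sketch beneath is illustrative only, not the shared task’s official scoring script: it assumes WER is the fraction of words whose predicted phoneme sequence differs from the reference, and that a relative reduction is measured against a baseline WER. The example values are invented, not taken from the papers.

    # Illustrative only: WER as word-level exact-match error, plus relative reduction.
    def word_error_rate(predictions, references):
        """Fraction of words whose predicted pronunciation does not match the reference."""
        assert len(predictions) == len(references)
        errors = sum(p != r for p, r in zip(predictions, references))
        return errors / len(references)

    def relative_reduction(baseline_wer, system_wer):
        """Relative improvement over a baseline, e.g. 0.14 means a 14% relative reduction."""
        return (baseline_wer - system_wer) / baseline_wer

    # Hypothetical toy data: gold and predicted phoneme strings for three words.
    gold = ["k æ t", "d ɔ g", "f ɪ ʃ"]
    pred = ["k æ t", "d ɒ g", "f ɪ ʃ"]
    print(word_error_rate(pred, gold))     # 0.333... (one of three words is wrong)
    print(relative_reduction(0.29, 0.25))  # ≈ 0.138, i.e. roughly a 14% relative reduction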
2021
Results of the Second SIGMORPHON Shared Task on Multilingual Grapheme-to-Phoneme Conversion
Lucas F.E. Ashby | Travis M. Bartley | Simon Clematide | Luca Del Signore | Cameron Gibson | Kyle Gorman | Yeonju Lee-Sikka | Peter Makarov | Aidan Malanoski | Sean Miller | Omar Ortiz | Reuben Raff | Arundhati Sengupta | Bora Seo | Yulia Spektor | Winnie Yan
Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
Grapheme-to-phoneme conversion is an important component in many speech technologies, but until recently there were no multilingual benchmarks for this task. The second iteration of the SIGMORPHON shared task on multilingual grapheme-to-phoneme conversion features many improvements over the previous year’s task (Gorman et al., 2020), including additional languages, a stronger baseline, three subtasks varying the amount of available resources, extensive quality assurance procedures, and automated error analyses. Four teams submitted a total of thirteen systems, at best achieving relative reductions of word error rate of 11% in the high-resource subtask and 4% in the low-resource subtask.