Massively Multilingual Adversarial Speech Recognition

Oliver Adams, Matthew Wiesner, Shinji Watanabe, David Yarowsky


Abstract
We report on the adaptation of multilingual end-to-end speech recognition models trained on as many as 100 languages. Our findings shed light on the relative importance of similarity between the target and pretraining languages along the dimensions of phonetics, phonology, language family, geographical location, and orthography. In this context, experiments demonstrate the effectiveness of two additional pretraining objectives in encouraging language-independent encoder representations: a context-independent phoneme objective paired with a language-adversarial classification objective.
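
The adversarial setup named in the abstract lends itself to a compact illustration. What follows is a minimal sketch in PyTorch, not the authors' implementation: a shared speech encoder feeds a frame-level context-independent phoneme classifier, while a language classifier sits behind a gradient reversal layer so that its gradients push the encoder toward language-independent representations. All module names, layer sizes, and the reversal coefficient lam are illustrative assumptions, not the paper's actual configuration.

# Minimal sketch (not the authors' code) of a language-adversarial
# auxiliary objective on top of a shared encoder. Sizes are illustrative.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient in
    the backward pass, so the language classifier learns normally while
    the encoder is trained to confuse it."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AdversarialEncoder(nn.Module):
    def __init__(self, feat_dim=80, hidden=320, n_phones=100, n_langs=100, lam=1.0):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        # Context-independent phoneme head: per-frame phoneme scores.
        self.phone_head = nn.Linear(hidden, n_phones)
        # Language-ID head, placed behind the gradient reversal layer.
        self.lang_head = nn.Linear(hidden, n_langs)
        self.lam = lam

    def forward(self, feats):
        h, _ = self.encoder(feats)              # (batch, time, hidden)
        phone_logits = self.phone_head(h)       # frame-level phoneme logits
        pooled = h.mean(dim=1)                  # utterance-level summary
        rev = GradReverse.apply(pooled, self.lam)
        lang_logits = self.lang_head(rev)       # language-ID logits
        return phone_logits, lang_logits

# Toy usage: both auxiliary losses; the main end-to-end ASR loss
# (omitted here) would be added to this sum during pretraining.
model = AdversarialEncoder()
feats = torch.randn(4, 200, 80)                 # (batch, frames, features)
phone_tgt = torch.randint(0, 100, (4, 200))     # dummy frame-level phonemes
lang_tgt = torch.randint(0, 100, (4,))          # dummy language labels
phone_logits, lang_logits = model(feats)
ce = nn.CrossEntropyLoss()
loss = ce(phone_logits.reshape(-1, 100), phone_tgt.reshape(-1)) + ce(lang_logits, lang_tgt)
loss.backward()

In training, these two losses are added to the main end-to-end recognition objective; the coefficient lam controls how strongly the encoder is penalized for encoding language identity.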
Anthology ID: N19-1009
Volume: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month: June
Year: 2019
Address: Minneapolis, Minnesota
Editors: Jill Burstein, Christy Doran, Thamar Solorio
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 96–108
URL: https://aclanthology.org/N19-1009
DOI: 10.18653/v1/N19-1009
Cite (ACL): Oliver Adams, Matthew Wiesner, Shinji Watanabe, and David Yarowsky. 2019. Massively Multilingual Adversarial Speech Recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 96–108, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal): Massively Multilingual Adversarial Speech Recognition (Adams et al., NAACL 2019)
PDF: https://preview.aclanthology.org/ingest-2024-clasp/N19-1009.pdf
Video: https://preview.aclanthology.org/ingest-2024-clasp/N19-1009.mp4