Improved Speech Representations with Multi-Target Autoregressive Predictive Coding

Yu-An Chung, James Glass


Abstract
Training objectives based on predictive coding have recently been shown to be very effective at learning meaningful representations from unlabeled speech. One example is Autoregressive Predictive Coding (Chung et al., 2019), which trains an autoregressive RNN to generate an unseen future frame given a context such as recent past frames. The basic hypothesis of these approaches is that hidden states that can accurately predict future frames are a useful representation for many downstream tasks. In this paper we extend this hypothesis and aim to enrich the information encoded in the hidden states by training the model to make more accurate future predictions. We propose an auxiliary objective that serves as a regularization to improve generalization of the future frame prediction task. Experimental results on phonetic classification, speech recognition, and speech translation not only support the hypothesis, but also demonstrate the effectiveness of our approach in learning representations that contain richer phonetic content.
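The core APC objective described above, predicting a frame n steps into the future from an autoregressive encoding of the past, can be sketched minimally as follows. This is an illustrative NumPy sketch, not the authors' implementation: a single vanilla RNN layer stands in for the paper's autoregressive model, all weight names (`Wxh`, `Whh`, `Wout`) are hypothetical, and the L1 regression loss follows the original APC formulation (Chung et al., 2019).

```python
import numpy as np

def apc_l1_loss(frames, n, Wxh, Whh, Wout, bh, bout):
    """Minimal sketch of the APC future-frame prediction loss.

    frames: (T, d) sequence of acoustic feature frames.
    n: prediction step -- the hidden state at time t predicts frame t+n.
    A single-layer vanilla RNN stands in for the autoregressive encoder.
    """
    T, d = frames.shape
    h = np.zeros(Whh.shape[0])
    preds = []
    for t in range(T):
        # Autoregressive hidden state summarizing frames up to time t
        h = np.tanh(frames[t] @ Wxh + h @ Whh + bh)
        # Linear projection of the hidden state to a predicted future frame
        preds.append(h @ Wout + bout)
    preds = np.stack(preds)
    # L1 loss between the prediction at t and the true frame at t+n
    return np.abs(preds[:T - n] - frames[n:]).mean()

# Toy usage with random frames and weights (dimensions are arbitrary)
rng = np.random.default_rng(0)
d, hdim, T, n = 8, 16, 50, 3
loss = apc_l1_loss(
    rng.normal(size=(T, d)), n,
    rng.normal(scale=0.1, size=(d, hdim)),
    rng.normal(scale=0.1, size=(hdim, hdim)),
    rng.normal(scale=0.1, size=(hdim, d)),
    np.zeros(hdim), np.zeros(d),
)
print(float(loss))
```

The paper's contribution is an auxiliary objective added on top of this loss; the sketch shows only the base future-frame prediction task whose generalization that auxiliary objective is designed to improve.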
Anthology ID:
2020.acl-main.213
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
2353–2358
Language:
URL:
https://aclanthology.org/2020.acl-main.213
DOI:
10.18653/v1/2020.acl-main.213
Bibkey:
Cite (ACL):
Yu-An Chung and James Glass. 2020. Improved Speech Representations with Multi-Target Autoregressive Predictive Coding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2353–2358, Online. Association for Computational Linguistics.
Cite (Informal):
Improved Speech Representations with Multi-Target Autoregressive Predictive Coding (Chung & Glass, ACL 2020)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2020.acl-main.213.pdf
Video:
http://slideslive.com/38928760
Data
LibriSpeech