Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation

Matthias Sperber, Graham Neubig, Jan Niehues, Alex Waibel


Abstract
Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts. Several recent works have shown the feasibility of collapsing the cascade into a single, direct model that can be trained in an end-to-end fashion on a corpus of translated speech. However, experiments are inconclusive on whether the cascade or the direct model is stronger, and have only been conducted under the unrealistic assumption that both are trained on equal amounts of data, ignoring other available speech recognition and machine translation corpora. In this paper, we demonstrate that direct speech translation models require more data to perform well than cascaded models, and although they allow including auxiliary data through multi-task training, they are poor at exploiting such data, putting them at a severe disadvantage. As a remedy, we propose the use of end-to-end trainable models with two attention mechanisms, the first establishing source speech to source text alignments, the second modeling source to target text alignment. We show that such models naturally decompose into multi-task–trainable recognition and translation tasks and propose an attention-passing technique that alleviates error propagation issues in a previous formulation of a model with two attention stages. Our proposed model outperforms all examined baselines and is able to exploit auxiliary training data much more effectively than direct attentional models.
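The two-stage idea in the abstract — a first attention aligning speech to source text, and a second attention whose inputs are the first stage's continuous context vectors rather than a discrete transcript — can be sketched in pure Python. Everything below (names, dimensions, random vectors) is illustrative only, not the paper's actual architecture:

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(query, keys, values):
    """Single-head dot-product attention: weight values by query-key similarity."""
    weights = softmax([dot(query, k) for k in keys])
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return context, weights

dim = 4
random.seed(0)

# Speech encoder states (toy stand-ins for acoustic frame encodings).
enc_states = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(6)]

# Stage 1: a recognizer decoder attends over the speech encoder states.
# Attention-passing keeps its context vectors instead of committing to a
# (possibly erroneous) 1-best transcript.
rec_queries = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(3)]
passed_contexts = [attend(q, enc_states, enc_states)[0] for q in rec_queries]

# Stage 2: the translation decoder attends over the passed context vectors,
# so recognition uncertainty is never discretized away between the stages.
trans_query = [random.gauss(0, 1) for _ in range(dim)]
target_context, target_weights = attend(trans_query,
                                        passed_contexts, passed_contexts)
```

Because the second stage consumes continuous vectors, gradients can flow end-to-end through both attention stages, which is what lets the model decompose into jointly trainable recognition and translation sub-tasks.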
Anthology ID:
Q19-1020
Volume:
Transactions of the Association for Computational Linguistics, Volume 7
Year:
2019
Address:
Cambridge, MA
Editors:
Lillian Lee, Mark Johnson, Brian Roark, Ani Nenkova
Venue:
TACL
Publisher:
MIT Press
Pages:
313–325
URL:
https://aclanthology.org/Q19-1020
DOI:
10.1162/tacl_a_00270
Cite (ACL):
Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2019. Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation. Transactions of the Association for Computational Linguistics, 7:313–325.
Cite (Informal):
Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation (Sperber et al., TACL 2019)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/Q19-1020.pdf
Video:
https://preview.aclanthology.org/naacl-24-ws-corrections/Q19-1020.mp4