Abstract
In a pipeline speech translation system, the automatic speech recognition (ASR) component passes its recognition errors on to the downstream machine translation (MT) component. A standard MT system is usually trained on a parallel corpus of clean text and therefore performs poorly on text containing recognition noise, a gap well known in the speech translation community. In this paper, we propose a training architecture that makes a neural machine translation model more robust to speech recognition errors. Our approach addresses the encoder and the decoder simultaneously, using adversarial learning and data augmentation, respectively. Experimental results on the IWSLT 2018 speech translation task show that our approach can bridge the gap between ASR output and MT input, outperforming the baseline by up to 2.83 BLEU on noisy ASR output while maintaining comparable performance on clean text.
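As a rough illustration of the adversarial side of the approach described above, the sketch below shows one common way such a stability objective can be set up: a discriminator learns to tell clean from noise-corrupted encoder states, while the encoder is updated to make them indistinguishable (the decoder side would additionally be trained on noisy-source / clean-target pairs via data augmentation). This is not the authors' implementation; the toy encoder, the `noise()` corruption function, and all hyper-parameters are hypothetical placeholders.

```python
# Minimal sketch of an adversarial stability objective, assuming a toy
# embedding encoder and a made-up token-replacement noise model.
import torch
import torch.nn as nn

VOCAB, DIM = 1000, 256

encoder = nn.Embedding(VOCAB, DIM)  # stand-in for an NMT encoder
discriminator = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

enc_opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
dis_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def noise(tokens: torch.Tensor) -> torch.Tensor:
    """Hypothetical ASR-style corruption: randomly replace 10% of tokens."""
    mask = torch.rand_like(tokens, dtype=torch.float) < 0.1
    return torch.where(mask, torch.randint_like(tokens, VOCAB), tokens)

clean = torch.randint(0, VOCAB, (32, 20))  # fake batch of source sentences
noisy = noise(clean)

# 1) Discriminator step: learn to separate clean vs. noisy encoder states.
h_clean = encoder(clean).mean(dim=1).detach()
h_noisy = encoder(noisy).mean(dim=1).detach()
d_loss = bce(discriminator(h_clean), torch.ones(32, 1)) + \
         bce(discriminator(h_noisy), torch.zeros(32, 1))
dis_opt.zero_grad(); d_loss.backward(); dis_opt.step()

# 2) Encoder step: fool the discriminator so noisy states look "clean".
#    In a full system this term is added to the usual translation loss.
adv_loss = bce(discriminator(encoder(noisy).mean(dim=1)), torch.ones(32, 1))
enc_opt.zero_grad(); adv_loss.backward(); enc_opt.step()
```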
- Anthology ID:
- 2019.iwslt-1.29
- Volume:
- Proceedings of the 16th International Conference on Spoken Language Translation
- Month:
- November 2-3
- Year:
- 2019
- Address:
- Hong Kong
- Editors:
- Jan Niehues, Rolando Cattoni, Sebastian Stüker, Matteo Negri, Marco Turchi, Thanh-Le Ha, Elizabeth Salesky, Ramon Sanabria, Loic Barrault, Lucia Specia, Marcello Federico
- Venue:
- IWSLT
- SIG:
- SIGSLT
- Publisher:
- Association for Computational Linguistics
- URL:
- https://aclanthology.org/2019.iwslt-1.29
- Cite (ACL):
- Qiao Cheng, Meiyuan Fan, Yaqian Han, Jin Huang, and Yitao Duan. 2019. Breaking the Data Barrier: Towards Robust Speech Translation via Adversarial Stability Training. In Proceedings of the 16th International Conference on Spoken Language Translation, Hong Kong. Association for Computational Linguistics.
- Cite (Informal):
- Breaking the Data Barrier: Towards Robust Speech Translation via Adversarial Stability Training (Cheng et al., IWSLT 2019)
- PDF:
- https://aclanthology.org/2019.iwslt-1.29.pdf