Abstract
Automatic speech translation is sensitive to speech recognition errors, but in a multilingual scenario, the same content may be available in various languages via simultaneous interpreting, dubbing or subtitling. In this paper, we hypothesize that leveraging multiple sources will improve translation quality if the sources complement one another in terms of the correct information they contain. To this end, we first show that on a 10-hour ESIC corpus, the ASR errors in the original English speech and its simultaneous interpreting into German and Czech are mutually independent. We then use two sources, English and German, in a multi-source setting for translation into Czech to establish its robustness to ASR errors. Furthermore, we observe this robustness when translating both noisy sources together in a simultaneous translation setting. Our results show that multi-source neural machine translation has the potential to be useful in a real-time simultaneous translation setting, thereby motivating further investigation in this area.
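The abstract describes a multi-source setup that combines two potentially noisy inputs (the English ASR output and its German interpretation) for translation into Czech. As a purely illustrative sketch, and not the authors' implementation, one common way to realize multi-source MT is to concatenate the sources, each prefixed with a language tag, into a single encoder input; the tag strings, example sentences, and function name below are hypothetical.

```python
# Illustrative sketch only: concatenation-based multi-source MT input.
# Tag tokens, names, and example data are hypothetical, not from the paper.

def build_multisource_input(en_asr: str, de_interpreting: str) -> str:
    """Combine the noisy English ASR output and the German interpreting
    transcript into one tagged input string for a multi-source MT model."""
    return f"<en> {en_asr.strip()} <de> {de_interpreting.strip()}"

if __name__ == "__main__":
    # ASR errors in the two sources tend to be independent, so each source
    # may recover content the other one garbled.
    english = "the comission will adopt the proposal next week"  # ASR error: "comission"
    german = "die Kommission wird den Vorschlag nächste Woche annehmen"
    print(build_multisource_input(english, german))
```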
- Anthology ID:
- 2023.findings-acl.228
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2023
- Month:
- July
- Year:
- 2023
- Address:
- Toronto, Canada
- Editors:
- Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 3707–3723
- URL:
- https://aclanthology.org/2023.findings-acl.228
- DOI:
- 10.18653/v1/2023.findings-acl.228
- Cite (ACL):
- Dominik Macháček, Peter Polák, Ondřej Bojar, and Raj Dabre. 2023. Robustness of Multi-Source MT to Transcription Errors. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3707–3723, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal):
- Robustness of Multi-Source MT to Transcription Errors (Macháček et al., Findings 2023)
- PDF:
- https://aclanthology.org/2023.findings-acl.228.pdf