Abstract
In spoken language translation, it is crucial that an automatic speech recognition (ASR) system produces outputs that can be adequately translated by a statistical machine translation (SMT) system. While word error rate (WER) is the standard metric of ASR quality, its assumption that each ASR error type is weighted equally is violated in an SMT system that relies on structured input. In this paper, we outline a statistical framework for analyzing the impact of specific ASR error types on translation quality in a speech translation pipeline. Our approach is based on linear mixed-effects models, which allow us to analyze the effect of ASR errors on a translation quality metric. The mixed-effects models take into account the variability of ASR systems and the difficulty of each speech utterance being translated in a specific experimental setting. We use mixed-effects models to verify that the ASR errors that compose the WER metric do not contribute equally to translation quality and that interactions exist between ASR errors that cumulatively affect an SMT system’s ability to translate an utterance. Our experiments are carried out on the English to French language pair using eight ASR systems and seven post-edited machine translation references from the IWSLT 2013 evaluation campaign. We report significant findings that demonstrate differences in the contributions of specific ASR error types toward speech translation quality and suggest further error types that may contribute to translation difficulty.
- Anthology ID:
- 2014.amta-researchers.20
- Volume:
- Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Researchers Track
- Month:
- October 22–26
- Year:
- 2014
- Address:
- Vancouver, Canada
- Editors:
- Yaser Al-Onaizan, Michel Simard
- Venue:
- AMTA
- SIG:
- Publisher:
- Association for Machine Translation in the Americas
- Note:
- Pages:
- 261–274
- Language:
- URL:
- https://aclanthology.org/2014.amta-researchers.20
- DOI:
- Cite (ACL):
- Nicholas Ruiz and Marcello Federico. 2014. Assessing the impact of speech recognition errors on machine translation quality. In Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Researchers Track, pages 261–274, Vancouver, Canada. Association for Machine Translation in the Americas.
- Cite (Informal):
- Assessing the impact of speech recognition errors on machine translation quality (Ruiz & Federico, AMTA 2014)
- PDF:
- https://preview.aclanthology.org/add_acl24_videos/2014.amta-researchers.20.pdf
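The mixed-effects analysis described in the abstract can be sketched with statsmodels' `MixedLM` formula API. This is a minimal illustration on synthetic data, not the authors' code: the column names (`subs`, `ins`, `dels` for ASR error-type counts, `ter` for a translation quality score) and the simulated effect sizes are hypothetical.

```python
# Sketch of the paper's modeling idea: regress a per-utterance translation
# quality score on counts of ASR error types, with a random intercept per
# ASR system. All data here is synthetic; variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_systems, n_utts = 8, 50  # eight ASR systems, as in the paper's setup

rows = []
for sys_id in range(n_systems):
    sys_offset = rng.normal(0, 2)  # per-system random intercept
    for _ in range(n_utts):
        subs = rng.poisson(2)  # substitutions
        ins = rng.poisson(1)   # insertions
        dels = rng.poisson(1)  # deletions
        # Simulate unequal per-error-type impact on translation quality
        # (substitutions assumed most harmful here, purely for illustration).
        ter = 20 + 3.0 * subs + 1.0 * ins + 2.0 * dels + sys_offset + rng.normal(0, 1)
        rows.append(dict(system=f"asr{sys_id}", subs=subs, ins=ins, dels=dels, ter=ter))

df = pd.DataFrame(rows)

# Fixed effects: error-type counts; grouping (random intercept): ASR system.
model = smf.mixedlm("ter ~ subs + ins + dels", df, groups=df["system"])
result = model.fit()
print(result.params[["subs", "ins", "dels"]])
```

The fitted fixed-effect coefficients recover the simulated per-error-type weights, which is the kind of evidence the paper uses to argue that WER's equal weighting of error types is violated. Interactions between error types, which the abstract also investigates, could be added to the formula with terms such as `subs:dels`.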