Abstract
Recent advances in automatic quality estimation for machine translation have focused exclusively on written language, leaving the speech modality underexplored. In this work, we formulate the task of quality estimation for speech translation (SpeechQE), construct a benchmark, and evaluate a family of systems based on cascaded and end-to-end architectures. In the process, we introduce a novel end-to-end system leveraging a pre-trained text LLM. Results suggest that end-to-end approaches are better suited to estimating the quality of direct speech translation than cascaded pipelines built around quality estimation systems designed for text. More broadly, we argue that quality estimation for speech translation needs to be studied as a problem separate from that of text, and we release our [data and models](https://github.com/h-j-han/SpeechQE) to guide further research in this space.
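To make the architectural contrast concrete, below is a minimal sketch of a *cascaded* SpeechQE baseline of the kind the abstract compares against: an ASR model transcribes the source speech, and a reference-free text QE model then scores the (transcript, translation) pair. The specific models (Whisper, CometKiwi), file names, and the example hypothesis are illustrative assumptions, not the authors' exact configuration; the end-to-end system instead scores the speech and translation jointly (see the released models in the repository).

```python
# Hypothetical cascaded SpeechQE baseline: ASR transcript + text-based QE.
# Model choices and paths are illustrative assumptions, not the paper's setup.
import whisper
from comet import download_model, load_from_checkpoint

# 1) ASR: turn the source speech into text (assumed audio file path).
asr_model = whisper.load_model("base")
source_text = asr_model.transcribe("source_speech.wav")["text"]

# 2) Text QE: reference-free quality estimation over (source, translation).
qe_path = download_model("Unbabel/wmt22-cometkiwi-da")  # gated checkpoint; assumed here
qe_model = load_from_checkpoint(qe_path)

# Hypothesis produced by some speech translation system (placeholder example).
translation = "Hola, ¿cómo estás?"
scores = qe_model.predict([{"src": source_text, "mt": translation}],
                          batch_size=1, gpus=0)
print(scores.system_score)  # estimated translation quality, no reference needed
```

Any ASR errors in `source_text` propagate into the QE score, which is one motivation the abstract gives for studying end-to-end SpeechQE directly on speech.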
- Anthology ID: 2024.emnlp-main.1218
- Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
- Month: November
- Year: 2024
- Address: Miami, Florida, USA
- Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 21852–21867
- URL: https://aclanthology.org/2024.emnlp-main.1218
- DOI: 10.18653/v1/2024.emnlp-main.1218
- Cite (ACL): HyoJung Han, Kevin Duh, and Marine Carpuat. 2024. SpeechQE: Estimating the Quality of Direct Speech Translation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21852–21867, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal): SpeechQE: Estimating the Quality of Direct Speech Translation (Han et al., EMNLP 2024)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.emnlp-main.1218.pdf