Abstract
How can we find the proper moments to generate partial sentence translations given a streaming speech input? Existing approaches that wait and translate for a fixed duration often break the acoustic units in speech, since the boundaries between acoustic units are uneven. In this paper, we propose MoSST, a simple yet effective method for translating streaming speech content. Given a usually long speech sequence, we develop an efficient monotonic segmentation module inside an encoder-decoder model that accumulates acoustic information incrementally and detects proper speech unit boundaries in the input for the speech translation task. Experiments on multiple translation directions of the MuST-C dataset show that MoSST outperforms existing methods and achieves the best trade-off between translation quality (BLEU) and latency. Our code is available at https://github.com/dqqcasia/mosst.
- Anthology ID:
- 2022.acl-long.50
- Volume:
- Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month:
- May
- Year:
- 2022
- Address:
- Dublin, Ireland
- Editors:
- Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 680–694
- URL:
- https://aclanthology.org/2022.acl-long.50
- DOI:
- 10.18653/v1/2022.acl-long.50
- Cite (ACL):
- Qian Dong, Yaoming Zhu, Mingxuan Wang, and Lei Li. 2022. Learning When to Translate for Streaming Speech. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 680–694, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal):
- Learning When to Translate for Streaming Speech (Dong et al., ACL 2022)
- PDF:
- https://preview.aclanthology.org/dois-2013-emnlp/2022.acl-long.50.pdf
- Code
- dqqcasia/mosst
- Data
- MuST-C
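The core idea in the abstract can be illustrated with a toy sketch. This is not the authors' implementation: `detect_boundary`, `segment_stream`, the energy threshold, and the silence-run heuristic are all hypothetical stand-ins for MoSST's learned monotonic segmentation module. The sketch only shows the adaptive policy the paper contrasts with fixed-duration waiting: accumulate frames monotonically and emit a segment for translation only when a plausible acoustic unit boundary is detected.

```python
def detect_boundary(frames, silence_threshold=0.1, min_silence=3):
    """Treat a run of `min_silence` low-energy frames as an acoustic
    unit boundary (a crude stand-in for a learned boundary detector)."""
    if len(frames) < min_silence:
        return False
    return all(abs(f) < silence_threshold for f in frames[-min_silence:])

def segment_stream(stream, **kwargs):
    """Accumulate frames incrementally; yield a segment whenever a
    boundary is detected, instead of cutting at fixed durations."""
    buffer = []
    for frame in stream:
        buffer.append(frame)
        if detect_boundary(buffer, **kwargs):
            yield buffer
            buffer = []
    if buffer:  # flush any trailing frames at end of stream
        yield buffer

# Toy stream: two "speech" bursts separated by silence.
stream = [0.9, 0.8, 0.7, 0.0, 0.0, 0.0, 0.6, 0.5, 0.0, 0.0, 0.0]
segments = list(segment_stream(stream))  # two segments, one per burst
```

Each yielded segment would then be handed to the translation decoder, so translation latency adapts to where unit boundaries actually fall rather than to a fixed clock.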