Network-based speech-to-speech translation
Chiori Hori, Sakriani Sakti, Michael Paul, Noriyuki Kimura, Yutaka Ashikari, Ryosuke Isotani, Eiichiro Sumita, Satoshi Nakamura
Abstract
This demo presents a network-based speech-to-speech translation system. The system was designed to perform real-time, location-free, multi-party translation between speakers of different languages. The spoken language modules, namely automatic speech recognition (ASR), machine translation (MT), and text-to-speech synthesis (TTS), are connected through Web servers that can be accessed via client applications worldwide. In this demo, we will show multi-party speech-to-speech translation among Japanese, Chinese, Indonesian, Vietnamese, and English, provided by the NICT server. These speech-to-speech modules have been developed by NICT as part of the A-STAR (Asian Speech Translation Advanced Research) consortium project.
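The client-server pipeline described in the abstract can be illustrated with a minimal sketch: a client posts audio to an ASR service, forwards the recognized text to an MT service, and sends the translated text to a TTS service. The endpoint URLs, parameter names, and response fields below are hypothetical placeholders; the abstract does not specify the actual NICT/A-STAR server interface.

```python
import requests  # third-party HTTP client

# Hypothetical service endpoints, for illustration only.
ASR_URL = "http://example.org/asr"
MT_URL = "http://example.org/mt"
TTS_URL = "http://example.org/tts"


def translate_speech(wav_bytes: bytes, src_lang: str, tgt_lang: str) -> bytes:
    """Chain ASR -> MT -> TTS over illustrative web services."""
    # 1. Speech recognition: source-language audio -> source-language text.
    text = requests.post(ASR_URL, data=wav_bytes,
                         params={"lang": src_lang}).json()["text"]
    # 2. Machine translation: source-language text -> target-language text.
    translated = requests.post(MT_URL, json={"text": text,
                                             "src": src_lang,
                                             "tgt": tgt_lang}).json()["text"]
    # 3. Speech synthesis: target-language text -> target-language audio.
    return requests.post(TTS_URL, json={"text": translated,
                                        "lang": tgt_lang}).content


# Example: translate a Japanese utterance into English speech.
# with open("utterance_ja.wav", "rb") as f:
#     audio_en = translate_speech(f.read(), "ja", "en")
```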
- Anthology ID: 2009.iwslt-papers.6
- Volume: Proceedings of the 6th International Workshop on Spoken Language Translation: Papers
- Month: December 1-2
- Year: 2009
- Address: Tokyo, Japan
- Venue: IWSLT
- SIG: SIGSLT
- URL: https://aclanthology.org/2009.iwslt-papers.6
- Cite (ACL): Chiori Hori, Sakriani Sakti, Michael Paul, Noriyuki Kimura, Yutaka Ashikari, Ryosuke Isotani, Eiichiro Sumita, and Satoshi Nakamura. 2009. Network-based speech-to-speech translation. In Proceedings of the 6th International Workshop on Spoken Language Translation: Papers, Tokyo, Japan.
- Cite (Informal): Network-based speech-to-speech translation (Hori et al., IWSLT 2009)
- PDF: https://preview.aclanthology.org/nschneid-patch-3/2009.iwslt-papers.6.pdf