Ryosuke Isotani


2014

Towards Multilingual Conversations in the Medical Domain: Development of Multilingual Medical Data and A Network-based ASR System
Sakriani Sakti | Keigo Kubo | Sho Matsumiya | Graham Neubig | Tomoki Toda | Satoshi Nakamura | Fumihiro Adachi | Ryosuke Isotani
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper outlines recent developments in multilingual medical data and a multilingual speech recognition system for network-based speech-to-speech translation in the medical domain. The overall speech-to-speech translation (S2ST) system was designed to translate spoken utterances from a given source language into a target language in order to facilitate multilingual conversations and reduce the problems caused by language barriers in medical situations. Our final system utilizes weighted finite-state transducers with n-gram language models. Currently, the system covers three languages: Japanese, English, and Chinese. The difficulties involved in connecting the Japanese, English, and Chinese speech recognition systems through Web servers are discussed, and experimental results on simulated medical conversations are also presented.
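As a rough sketch of the modeling idea mentioned in the abstract (not the paper's actual decoder), the toy Python example below treats an n-gram language model as a weighted finite-state acceptor: states are word histories, arcs carry negative log probabilities, and competing recognition hypotheses are rescored by walking the automaton. All words and probabilities are invented for illustration.

import math

# Toy bigram LM as a weighted finite-state acceptor:
# state = previous word, arc = (next word, probability).
# Probabilities are made up purely for illustration.
LM_ARCS = {
    "<s>":   {"where": 0.6, "it": 0.4},
    "where": {"does": 0.7, "is": 0.3},
    "does":  {"it": 1.0},
    "it":    {"hurt": 0.8, "is": 0.2},
    "is":    {"here": 1.0},
    "hurt":  {"</s>": 1.0},
    "here":  {"</s>": 1.0},
}

def lm_cost(words):
    """Total -log probability of a word sequence, or None if a bigram is missing."""
    cost, state = 0.0, "<s>"
    for w in words + ["</s>"]:
        prob = LM_ARCS.get(state, {}).get(w)
        if prob is None:
            return None  # unseen bigram; a real system would back off or smooth
        cost += -math.log(prob)
        state = w  # advance to the next history state
    return cost

# Rescore two competing ASR hypotheses: lower cost means more probable.
for hyp in (["where", "does", "it", "hurt"], ["where", "is", "here"]):
    print(" ".join(hyp), "->", lm_cost(hyp))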

2013

Towards High-Reliability Speech Translation in the Medical Domain
Graham Neubig | Sakriani Sakti | Tomoki Toda | Satoshi Nakamura | Yuji Matsumoto | Ryosuke Isotani | Yukichi Ikeda
The First Workshop on Natural Language Processing for Medical and Healthcare Fields

2009

Construction of Chinese Segmented and POS-tagged Conversational Corpora and Their Evaluations on Spontaneous Speech Recognitions
Xinhui Hu | Ryosuke Isotani | Satoshi Nakamura
Proceedings of the 7th Workshop on Asian Language Resources (ALR7)

Network-based speech-to-speech translation
Chiori Hori | Sakriani Sakti | Michael Paul | Noriyuki Kimura | Yutaka Ashikari | Ryosuke Isotani | Eiichiro Sumita | Satoshi Nakamura
Proceedings of the 6th International Workshop on Spoken Language Translation: Papers

This demo presents a network-based speech-to-speech translation system. The system was designed to perform real-time, location-free, multi-party translation between speakers of different languages. The spoken language modules, namely automatic speech recognition (ASR), machine translation (MT), and text-to-speech synthesis (TTS), are connected through Web servers that can be accessed via client applications worldwide. In this demo, we will show multi-party speech-to-speech translation of Japanese, Chinese, Indonesian, Vietnamese, and English, provided by the NICT server. These speech-to-speech modules have been developed by NICT as part of the A-STAR (Asian Speech Translation Advanced Research) consortium project.
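To make the module chaining described in the abstract concrete, here is a minimal Python sketch of a client that calls ASR, MT, and TTS services over HTTP in sequence. The endpoint URLs and JSON field names are hypothetical placeholders, not the actual NICT/A-STAR server interfaces.

import json
import urllib.request

# Hypothetical endpoints and JSON fields, purely for illustration.
ASR_URL = "http://example.org/asr"
MT_URL = "http://example.org/mt"
TTS_URL = "http://example.org/tts"

def call_service(url, payload):
    """POST a JSON payload to one Web-server module and return its JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def speech_to_speech(audio_bytes, src_lang, tgt_lang):
    """Chain ASR -> MT -> TTS, mirroring the pipeline described in the abstract."""
    text = call_service(ASR_URL, {"lang": src_lang,
                                  "audio": audio_bytes.hex()})["text"]
    translation = call_service(MT_URL, {"src": src_lang, "tgt": tgt_lang,
                                        "text": text})["text"]
    audio_out = call_service(TTS_URL, {"lang": tgt_lang,
                                       "text": translation})["audio"]
    return bytes.fromhex(audio_out)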

2005

Speech-Activated Text Retrieval System for Cellular Phones with Web Browsing Capability
Takahiro Ikeda | Shin-ya Ishikawa | Kiyokazu Miki | Fumihiro Adachi | Ryosuke Isotani | Kenji Satoh | Akitoshi Okumura
Proceedings of the 19th Pacific Asia Conference on Language, Information and Computation

2003

A Speech Translation System with Mobile Wireless Clients
Kiyoshi Yamabana | Ken Hanazawa | Ryosuke Isotani | Seiya Osada | Akitoshi Okumura | Takao Watanabe
The Companion Volume to the Proceedings of 41st Annual Meeting of the Association for Computational Linguistics

1994

Speech Recognition Using a Stochastic Language Model Integrating Local and Global Constraints
Ryosuke Isotani | Shoichi Matsunaga
Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994