2013
The NICT ASR system for IWSLT 2013
Chien-Lin Huang | Paul R. Dixon | Shigeki Matsuda | Youzheng Wu | Xugang Lu | Masahiro Saiko | Chiori Hori
Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign
This study presents the NICT automatic speech recognition (ASR) system submitted for the IWSLT 2013 ASR evaluation. We apply two types of acoustic features and three types of acoustic models to the NICT ASR system. Our system comprises six subsystems with different acoustic features and models. This study reports the individual and fused system results and highlights the improvements made by our proposed methods, which include automatic segmentation of audio data, language model adaptation, speaker adaptive training of deep neural network models, and the NICT SprinTra decoder. Our experimental results indicate that the proposed methods offer good performance improvements on lecture speech recognition tasks. Our system achieved a 13.5% word error rate on the IWSLT 2013 ASR English test data set.
2012
The NICT ASR system for IWSLT2012
Hitoshi Yamamoto | Youzheng Wu | Chien-Lin Huang | Xugang Lu | Paul R. Dixon | Shigeki Matsuda | Chiori Hori | Hideki Kashioka
Proceedings of the 9th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper describes our automatic speech recognition (ASR) system for the IWSLT 2012 evaluation campaign. The target data of the campaign is selected from the TED talks, a collection of public speeches on a variety of topics spoken in English. Our ASR system is based on weighted finite-state transducers and exploits a combination of acoustic models for spontaneous speech, language models based on n-grams and factored recurrent neural networks trained with effectively selected corpora, and an unsupervised topic adaptation framework utilizing ASR results. Accordingly, the system achieved 10.6% and 12.0% word error rates on the tst2011 and tst2012 evaluation sets, respectively.
Factored recurrent neural network language model in TED lecture transcription
Youzheng Wu | Hitoshi Yamamoto | Xugang Lu | Shigeki Matsuda | Chiori Hori | Hideki Kashioka
Proceedings of the 9th International Workshop on Spoken Language Translation: Papers
In this study, we extend recurrent neural network-based language models (RNNLMs) by explicitly integrating morphological and syntactic factors (or features). Our proposed model, called a factored RNNLM, is expected to enhance standard RNNLMs. A number of experiments carried out on top of a state-of-the-art LVCSR system show that the factored RNNLM improves performance as measured by perplexity and word error rate. On the IWSLT TED test data sets, absolute word error rate reductions over the RNNLM and n-gram LM are 0.4–0.8 points.
Factored Language Model based on Recurrent Neural Network
Youzheng Wu | Xugang Lu | Hitoshi Yamamoto | Shigeki Matsuda | Chiori Hori | Hideki Kashioka
Proceedings of COLING 2012
2011
The NICT ASR system for IWSLT2011
Kazuhiko Abe | Youzheng Wu | Chien-lin Huang | Paul R. Dixon | Shigeki Matsuda | Chiori Hori | Hideki Kashioka
Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign
In this paper, we describe NICT’s participation in the IWSLT 2011 evaluation campaign for the ASR track. To recognize spontaneous speech, we prepared an acoustic model trained on additional spontaneous speech corpora and a language model constructed with text corpora distributed by the organizer. We built a multi-pass ASR system by adapting the acoustic and language models with previous ASR results. The target speech was selected from talks on the TED (Technology, Entertainment, Design) program. A large reduction in word error rate was obtained through speaker adaptation of the acoustic model with MLLR. Additional improvement was achieved not only by adaptation of the language model but also by parallel use of the baseline and speaker-dependent acoustic models. Accordingly, the final WER was reduced by 30% from the baseline ASR on the distributed test set.
Similarity Based Language Model Construction for Voice Activated Open-Domain Question Answering
István Varga | Kiyonori Ohtake | Kentaro Torisawa | Stijn De Saeger | Teruhisa Misu | Shigeki Matsuda | Jun’ichi Kazama
Proceedings of 5th International Joint Conference on Natural Language Processing
2008
Multilingual Mobile-Phone Translation Services for World Travelers
Michael Paul | Hideo Okuma | Hirofumi Yamamoto | Eiichiro Sumita | Shigeki Matsuda | Tohru Shimizu | Satoshi Nakamura
Coling 2008: Companion volume: Demonstrations