R. Gretter
2014
FBK @ IWSLT 2014 – ASR track
B. BabaAli | R. Serizel | S. Jalalvand | R. Gretter | D. Giuliani
Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper reports on the participation of FBK in the IWSLT 2014 evaluation campaign for Automatic Speech Recognition (ASR), which focused on the transcription of TED talks. The outputs of primary and contrastive systems were submitted for three languages, namely English, German and Italian. Most effort went into the development of the English transcription system. The primary system is based on the ROVER combination of the outputs of 5 transcription subsystems, all based on the Deep Neural Network Hidden Markov Model (DNN-HMM) hybrid. Before combination, word lattices generated by each subsystem are rescored using an efficient interpolation of 4-gram and Recurrent Neural Network (RNN) language models. The primary system achieves a Word Error Rate (WER) of 14.7% and 11.4% on the 2013 and 2014 official IWSLT English test sets, respectively. The subspace Gaussian mixture model (SGMM) system developed for German achieves 39.5% WER on the 2014 IWSLT German test set. For Italian, the primary transcription system is based on hidden Markov models and achieves 23.8% WER on the 2014 IWSLT Italian test set.
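The rescoring step above relies on a linear interpolation of 4-gram and RNN language model probabilities. The Python sketch below illustrates that interpolation on a single word sequence; the LM objects and their logprob interface are hypothetical stand-ins, and the paper's actual rescoring operates on word lattices rather than flat hypotheses.

import math

def interpolate_logprob(ngram_logprob, rnn_logprob, lam=0.5):
    """Linearly interpolate two LM probabilities in the probability
    domain and return the log of the mixture.

    ngram_logprob, rnn_logprob: natural-log probabilities of the same
    word in the same context, from the 4-gram and RNN LMs respectively.
    lam: interpolation weight for the RNN LM (tuned on held-out data).
    """
    p = lam * math.exp(rnn_logprob) + (1.0 - lam) * math.exp(ngram_logprob)
    return math.log(p)

def rescore_hypothesis(words, ngram_lm, rnn_lm, lam=0.5):
    """Score a word sequence with the interpolated LM. ngram_lm and
    rnn_lm are hypothetical objects exposing logprob(word, history)."""
    total = 0.0
    history = []
    for w in words:
        total += interpolate_logprob(ngram_lm.logprob(w, history),
                                     rnn_lm.logprob(w, history), lam)
        history.append(w)
    return total

class UniformLM:
    """Toy stand-in LM assigning the same probability to every word."""
    def __init__(self, vocab_size):
        self.logp = math.log(1.0 / vocab_size)
    def logprob(self, word, history):
        return self.logp

score = rescore_hypothesis("this is a test".split(),
                           UniformLM(10000), UniformLM(10000), lam=0.5)
print(score)  # 4 words * log(1/10000)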
2012
FBK@IWSLT 2012 – ASR track
D. Falavigna | R. Gretter | F. Brugnara | D. Giuliani
Proceedings of the 9th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper reports on the participation of FBK in the IWSLT 2012 evaluation campaign on automatic speech recognition, namely in the English ASR track. Both primary and contrastive submissions were sent for evaluation. The ASR system features acoustic models trained on a portion of the TED talk recordings that was automatically selected according to the fidelity of the provided transcriptions. Three decoding steps are performed, interleaved with acoustic feature normalization and acoustic model adaptation. A final rescoring step, based on an interpolated language model, is applied to the word graphs generated in the third decoding step. For the primary submission, the language models entering the interpolation are trained on both out-of-domain and in-domain text data, whereas the contrastive submission uses both "general purpose" and auxiliary language models trained only on out-of-domain text data. Despite this, the two submissions achieve similar performance.
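The abstract mentions acoustic feature normalization between decoding passes but does not specify the method; a common choice in multi-pass ASR systems is per-recording cepstral mean and variance normalization (CMVN), sketched below purely as an illustration of that assumed technique.

import numpy as np

def cmvn(features, eps=1e-8):
    """Cepstral mean and variance normalization.

    features: (num_frames, num_coeffs) array of acoustic features for
    one recording (or one speaker). Each coefficient is shifted to zero
    mean and scaled to unit variance across frames.
    """
    mean = features.mean(axis=0)
    std = features.std(axis=0)
    return (features - mean) / (std + eps)

# Example: normalize 13-dimensional MFCC-like features for 500 frames.
mfcc = np.random.randn(500, 13) * 2.0 + 5.0
normalized = cmvn(mfcc)
print(normalized.mean(axis=0).round(6))  # ~0 per coefficient
print(normalized.std(axis=0).round(6))   # ~1 per coefficient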
2011
FBK@IWSLT 2011
N. Ruiz | A. Bisazza | F. Brugnara | D. Falavigna | D. Giuliani | S. Jaber | R. Gretter | M. Federico
Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper reports on the participation of FBK in the IWSLT 2011 Evaluation, namely in the English ASR track, the Arabic-English MT track and the English-French MT and SLT tracks. Our ASR system features acoustic models trained on a portion of the TED talk recordings that was automatically selected according to the fidelity of the provided transcriptions. Three decoding steps are performed, interleaved with acoustic feature normalization and acoustic model adaptation. Concerning the MT and SLT systems, besides language-specific pre-processing and the automatic introduction of punctuation in the ASR output, two major improvements are reported over our last year's baselines. First, we applied a fill-up method for phrase-table adaptation; second, we explored the use of hybrid class-based language models to better capture the language style of public speeches.
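The fill-up method for phrase-table adaptation keeps every entry of the in-domain phrase table and adds an out-of-domain entry only when its source phrase is not covered in-domain. The toy sketch below illustrates that idea; the binary provenance feature appended to each entry is a common companion to fill-up and is included here as an assumption, not a detail taken from the abstract.

def fill_up(in_domain, out_of_domain):
    """Fill-up phrase-table combination (sketch).

    in_domain, out_of_domain: dicts mapping a source phrase to a dict
    of {target phrase: list of feature scores}. In-domain entries are
    kept unchanged; out-of-domain entries are added only for source
    phrases the in-domain table does not cover. A binary provenance
    flag marks where each entry came from (a common choice, assumed).
    """
    combined = {}
    for src, targets in in_domain.items():
        combined[src] = {tgt: scores + [1.0] for tgt, scores in targets.items()}
    for src, targets in out_of_domain.items():
        if src not in combined:
            combined[src] = {tgt: scores + [0.0] for tgt, scores in targets.items()}
    return combined

# Toy example with a single translation probability per entry.
in_dom = {"guten tag": {"good afternoon": [0.7]}}
out_dom = {"guten tag": {"hello": [0.9]}, "danke": {"thank you": [0.8]}}
print(fill_up(in_dom, out_dom))
# {'guten tag': {'good afternoon': [0.7, 1.0]}, 'danke': {'thank you': [0.8, 0.0]}}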