language model 0.002706380
training data 0.002291631
tbl rules 0.002158065
good rules 0.002151465
same data 0.002062712
transformation rules 0.002037671
lecture data 0.002022187
replacement rules 0.002015170
model adaptation 0.002011009
robust rules 0.002008972
lected rules 0.002005655
maining rules 0.002005655
language models 0.001955140
evaluation data 0.001935499
test data 0.001879035
baseline model 0.001825404
stochastic model 0.001805069
word error 0.001805040
rules 0.001779450
acoustic model 0.001762316
guage model 0.001758010
model the 0.001755968
word sequence 0.001745874
word sequences 0.001721753
ing word 0.001698006
contextual word 0.001660014
data sets 0.001646853
true word 0.001643508
average word 0.001643220
tbl rule 0.001635785
mismatched word 0.001606981
word lattices 0.001589474
word insertion 0.001585148
rule pruning 0.001573173
able data 0.001560591
available data 0.001555163
same topic 0.001545424
rule application 0.001542153
testing data 0.001536493
data tasr 0.001536310
rule selection 0.001533495
developmental data 0.001532112
context words 0.001527464
model 0.001521540
training set 0.001517365
transformation rule 0.001515391
lecture topic 0.001504899
rule discovery 0.001503081
rule scor 0.001487633
rule rbest 0.001482951
large training 0.001466267
natural language 0.001454030
language processing 0.001447305
tive language 0.001432851
matching language 0.001430318
language modelling 0.001425391
training size 0.001425210
icsiswb language 0.001423253
predictive language 0.001413309
same corpus 0.001406641
single words 0.001381660
topic adaptation 0.001279981
entire training 0.001279667
matching words 0.001276748
preceding words 0.001259461
limited training 0.001258052
text transcripts 0.001240457
same wer 0.001237958
training corpora 0.001233428
available training 0.001231194
training sample 0.001222720
tire training 0.001211655
training vocabu 0.001208102
evaluation test 0.001198934
correct text 0.001195775
language 0.001184840
same term 0.001161346
particular topic 0.001159578
perplexity models 0.001155609
lecture transcripts 0.001154627
ing text 0.001138503
same sentences 0.001137213
different instructor 0.001134146
different lan 0.001111676
test set 0.001104769
lecture tbl 0.001093002
same order 0.001083364
baseline models 0.001074164
different weeks 0.001073436
same instructor 0.001042988
asr system 0.001041639
relative wer 0.001034256
words 0.001031270
text area 0.001030589
topic interactive 0.001030553
text tasr 0.001028727
guage models 0.001006770
asr adaptation 0.000998734
relative error 0.000996530
ing corpus 0.000990015
