language model 0.00375407
probability model 0.003531304
model corpus 0.00348889
method model 0.003439669
model training 0.003431739
backoff model 0.003431738
model pruning 0.003377699
model parameters 0.00324829
bigram model 0.003187518
markov model 0.003184914
order model 0.003165996
model estimation 0.003121159
typical model 0.003081966
gram model 0.003080123
entropy model 0.003072586
full model 0.003060105
trigram model 0.003058543
model automaton 0.003057816
exact model 0.003032469
format model 0.003030769
baseline model 0.003027992
mdc model 0.003018183
mixture model 0.003014678
model train 0.003006899
model quality 0.003002299
model changes 0.002998416
original model 0.002991756
model functions 0.002983153
language model 0.002981214
acoustic model 0.002974425
model constraint 0.002972651
unpruned model 0.002967922
entire model 0.002966194
model 0.00273276
language models 0.00238827
word error 0.002309683
word vocabulary 0.002246557
long word 0.002233839
specific word 0.00221024
real word 0.002205231
subsequent word 0.002203195
models words 0.002164849
other models 0.002156716
models backoff 0.002065938
such models 0.001842638
gram models 0.001714323
statistical models 0.001682884
corpus distribution 0.001656192
katz models 0.001639204
input models 0.001633195
previous models 0.00163289
Witten-Bell models 0.001609847
Kneser-Ney models 0.001606145
footprint models 0.001603449
unpruned models 0.001602122
other words 0.001587645
corpus probability 0.001554674
words training 0.001496868
state probability 0.001483077
training corpus 0.001455109
smoothing method 0.001407478
backoff smoothing 0.001399547
models 0.00136696
statistical language 0.001331397
news language 0.001323284
natural language 0.001317057
distribution constraints 0.001282296
trained language 0.00127561
training data 0.001271889
language applications 0.001266883
other methods 0.001224082
ing method 0.001220275
words test 0.001199591
marginal distribution 0.001197207
large corpus 0.001191838
pruning algorithm 0.001191733
state probabilities 0.001183146
distribution calculations 0.001167635
perplexity results 0.001154043
stationary distribution 0.001153762
final words 0.001146642
distribution constraint 0.001139953
bigram state 0.001139291
smoothing methods 0.001134895
conditional probability 0.001130583
corpus distribution 0.001130362
stationary distribution 0.001130362
same set 0.001129888
order state 0.001117769
probability estimate 0.001107825
training corpora 0.001107015
standard smoothing 0.001094621
input features 0.001092639
unigram state 0.001090712
constraints method 0.001089143
backoff history 0.001087443
probability calculation 0.001085738
corpus length 0.001083042
pruning methods 0.001079265
history state 0.001072998
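The scored pairs above read like bigram relative frequencies over a tokenized corpus. A minimal sketch of how such a list might be produced, assuming plain whitespace tokenization and relative-frequency scoring (the function name and scoring method here are illustrative, not the method actually used to generate this list):

```python
from collections import Counter

def score_bigrams(text):
    """Score each adjacent word pair by its relative frequency
    among all bigrams in the text (illustrative scoring only)."""
    tokens = text.lower().split()
    pairs = list(zip(tokens, tokens[1:]))
    counts = Counter(pairs)
    total = len(pairs)
    # Sort descending by score, matching the layout of the list above.
    return dict(sorted(
        ((" ".join(p), c / total) for p, c in counts.items()),
        key=lambda kv: -kv[1]))

scores = score_bigrams("the language model beats the baseline model")
# Six distinct bigrams, each occurring once, so every score is 1/6.
```

Real collocation lists are usually scored with an association measure such as pointwise mutual information rather than raw frequency, which would explain why fragments of hyphenated words ("gram model", "ing method") survive as separate entries here.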
