language model 0.00446581
error model 0.004043627
source model 0.003849535
first model 0.003827514
phone model 0.003798714
model phone 0.003798714
correction model 0.003767373
new model 0.003695193
previous model 0.003672834
results model 0.003662379
basic model 0.003651036
initial model 0.003632432
linear model 0.003615511
gram model 0.003594712
single model 0.003591056
channel model 0.003591051
model ltr 0.003577419
ltr model 0.003577419
combined model 0.003567485
sequence model 0.003563129
moore model 0.00354227
nel model 0.00353799
model cmb 0.003537405
guage model 0.003536807
model substitutions 0.003534093
ror model 0.003521849
model fac 0.003521509
specialized model 0.003518874
model 0.00329345
word error 0.003189167
word pronunciation 0.003084606
source word 0.002995075
word accuracy 0.002862411
correct word 0.002825714
word pronunciations 0.002791176
word pairs 0.002748166
word bouncy 0.002721975
right word 0.002720066
probable word 0.002701407
word frequencies 0.002686735
word pronun 0.002682558
rect word 0.002675118
word pronuncia 0.002671826
acc word 0.002670913
word acc 0.002670913
intended word 0.002664603
source language 0.001728445
training data 0.001687504
possible words 0.001604398
correct words 0.001562154
error models 0.001515856
candidate words 0.001495585
data set 0.001442452
sequence language 0.001442039
fourgram language 0.001415584
rect words 0.001411558
training set 0.001405972
same data 0.001384156
same training 0.001347676
alignment algorithm 0.001303092
training algorithm 0.0012716
correct alignment 0.001243728
test set 0.001241372
correction models 0.001239602
probable translation 0.001218974
level alignment 0.0011777979999999999
words 0.00117543
language 0.00117236
same pronunciation 0.00116778
likely alignment 0.001165104
new error 0.00115192
previous models 0.001145063
different error 0.001140504
conversion data 0.001138606
training sample 0.001135298
probable alignment 0.001119421
training samples 0.001108058
large performance 0.001103503
pronunciation information 0.001101864
data sizes 0.001096041
sufficient data 0.001092164
cleaner data 0.0010875
act data 0.0010875
phone set 0.001085724
matic alignment 0.001082699
moore training 0.001074332
training individual 0.001065925
first context 0.0010589
modeling pronunciation 0.001057925
viterbi training 0.001054478
models ltr 0.001049648
log probability 0.001041414
separate error 0.00103682
good error 0.001034045
error rate 0.001031939
test time 0.001026743
following form 0.001026591
combined error 0.001024212
probability distribu 0.001021813
letter context 0.001020096
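
The list above reads as term-score pairs, with each score attached to a word unigram or bigram drawn from running text (fragments such as "nel model" or "rect word" appear to stem from hyphenated tokens in the underlying corpus). As a hedged illustration only, the sketch below shows one way such a ranked bigram list could be produced if the scores were simple relative bigram frequencies over a tokenized corpus; the function name ranked_bigrams and the input corpus_text are hypothetical, not taken from the source.

```python
# Minimal sketch (an assumption, not the source's actual scoring method):
# rank adjacent word pairs by relative frequency over a tokenized corpus.
from collections import Counter

def ranked_bigrams(corpus_text, top_n=100):
    tokens = corpus_text.lower().split()
    pair_counts = Counter(zip(tokens, tokens[1:]))   # count adjacent word pairs
    total = sum(pair_counts.values())                # normalizing constant
    return [(" ".join(pair), count / total)          # relative frequency as score
            for pair, count in pair_counts.most_common(top_n)]

# Hypothetical usage: print a term/score listing like the one above.
# for term, score in ranked_bigrams(open("paper.txt").read()):
#     print(f"{term}\t{score:.9f}")
```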
