sequence features 0.002656077
context features 0.002412106
transition features 0.002134507
training data 0.002133233
chain features 0.002111873
indicator features 0.002068749
features 0.0018427
training algorithm 0.001832223
large feature 0.001790186
training set 0.001767876
word error 0.001766661
training time 0.001707265
word accuracy 0.001681508
feature vector 0.001644768
sequence model 0.001635416
training process 0.001585665
data set 0.001578949
training methods 0.001562372
perceptron training 0.001556123
letter sequence 0.001545426
context data 0.001541559
input word 0.001510379
feature perceptron 0.001458703
feature weight 0.001454499
current feature 0.001444216
training part 0.001443229
mira training 0.001440077
discriminative training 0.001438908
feature sets 0.001437063
training example 0.001423309
phoneme sequence 0.001420711
native training 0.001417832
particular training 0.001410145
discriminative training 0.001378382
data sets 0.001345556
feature representation 0.001332887
feature difference 0.001322548
sequence information 0.001320443
celex data 0.001318625
complete feature 0.001305111
letter context 0.001301455
available data 0.00128664
feature template 0.001283292
correct sequence 0.001267595
evaluation data 0.001231462
output sequence 0.001209911
language model 0.001207238
other results 0.001206582
unseen data 0.001204727
segmentation model 0.001203396
current model 0.001202595
alignment model 0.001197732
pronalsyl data 0.001195478
unseen words 0.001193616
cmudict data 0.001192722
search algorithm 0.00119125
model updates 0.00117574
hmm system 0.001170235
markov model 0.001169938
sequence features 0.001167842
same learning 0.001164859
sequence pairs 0.001162506
training 0.00116108
other systems 0.001155189
phoneme error 0.001153395
training method 0.001144909
system performance 0.001144798
test time 0.001115247
letter segmentation 0.001113406
sequence modeling 0.001112975
prediction model 0.001108783
system output 0.001101267
mira model 0.001101036
model function 0.001098324
chain model 0.001091212
large penalty 0.0010909
current system 0.001085289
other applications 0.001083952
large updates 0.001080227
loss function 0.001073549
phoneme accuracy 0.001068242
perceptron algorithm 0.001066186
learning framework 0.001063949
feature 0.00106366
perceptron learning 0.00106109
different context 0.001058514
actual sequence 0.001050914
single letter 0.00104889
hmm approach 0.001046347
sive sequence 0.001034993
tree system 0.001026677
input letter 0.001021828
other approaches 0.001021536
letter evidence 0.001017852
phoneme loss 0.001008388
context size 0.001003398
previous phoneme 0.001000852
letter substrings 0.000996346
system comparison 0.000994535
phoneme conversion 0.00097901
