label features 0.00335711
training data 0.00267074
different feature 0.002623726
same features 0.002475799
many features 0.002373845
label sequence 0.002329283
training corpus 0.002291727
bigram features 0.002251446
feature sets 0.002244064
feature mapping 0.002217618
explicit feature 0.002211272
times feature 0.002199947
loss function 0.00217975
feature space 0.002175814
training time 0.002164653
informative features 0.00216174
training set 0.002136593
hmm features 0.002128258
spelling features 0.002125206
overlapping features 0.002116188
model labels 0.002043891
previous label 0.001994147
current training 0.001988744
possible label 0.001970608
current label 0.001957204
training example 0.001950611
learning algorithm 0.001924225
sequence learning 0.001920783
current model 0.001910884
feature 0.0019036
markov model 0.001902447
single training 0.00189446
generative model 0.001885019
features 0.00186195
correct label 0.001859825
individual label 0.001853757
label sequences 0.001844685
previous word 0.001831507
training instances 0.001824168
training instance 0.001812404
label noise 0.00180167
current word 0.001794564
label bias 0.001793002
whole label 0.00178752
training patterns 0.001779912
different loss 0.001761326
test data 0.001750831
learning task 0.001744645
model conditions 0.001727857
popular model 0.001723728
loss values 0.001700366
function optimization 0.001646457
objective function 0.001608346
learning domain 0.001606543
perceptron algorithm 0.001553089
sequence perceptron 0.001549647
training 0.0015267
learning tasks 0.001517411
loss functions 0.001513115
label 0.00149516
log loss 0.001491915
data sets 0.001484504
model 0.00144884
sequence set 0.001444016
loss case 0.00144318
discriminative learning 0.001441845
loss figure 0.001434162
different models 0.001422209
noisy data 0.001402547
machine learning 0.001400568
learning framework 0.001383671
exponential loss 0.001374144
above loss 0.001357666
optimization method 0.001322354
pointwise loss 0.001309051
rank loss 0.001300085
convex loss 0.001298969
logarithmic loss 0.001296689
four loss 0.00129441
different methods 0.001280385
same time 0.001251802
test time 0.001244744
different optimization 0.001228033
test accuracy 0.001198616
conditional probability 0.001190544
different objective 0.001189922
sequential algorithm 0.001185926
different regularization 0.001175702
other methods 0.00117316
descent method 0.00117171
markov models 0.00115569
different tasks 0.001150877
