training data 0.00372032
error training 0.003314555
training set 0.003267175
other training 0.003182806
training corpus 0.003165957
training approach 0.003161724
supervised training 0.003119776
training sentences 0.003081097
risk training 0.003069008
translation model 0.00297007
training methods 0.002916876
training criterion 0.00288554
training criteria 0.002832901
training examples 0.002824825
above training 0.002807163
training procedures 0.002796826
separate training 0.002782428
bleu translation 0.002572489
training 0.00246601
loss function 0.002244793
machine translation 0.002147514
translation metrics 0.002115797
translation experiments 0.00209004
translation systems 0.002078944
reference translation 0.002071316
translation metric 0.002070256
test data 0.002033139
translation sys 0.002017835
other model 0.002009186
translation community 0.002005973
candidate translation 0.001983625
language model 0.00193394
word error 0.001914045
model parameters 0.001872818
linear model 0.001819794
linear loss 0.001805314
sentence test 0.001772909
english word 0.001749345
current model 0.001724536
direct loss 0.001685966
translation 0.00167768
expected loss 0.001676607
loss functions 0.001667009
development data 0.00164936
loss statistics 0.001648261
nonlinear loss 0.001627031
bleu evaluation 0.001626358
other features 0.001622368
possible loss 0.001609641
total loss 0.001606108
overall loss 0.001601116
template model 0.001584501
test set 0.001579994
combined model 0.001575658
bleu score 0.001572429
loss surface 0.001567637
loss hypotheses 0.001565081
language models 0.001560987
word speech 0.001559744
loss minimiza 0.001555503
bitrary loss 0.001555503
objective function 0.001545818
heldout data 0.00154408
other words 0.001542151
such models 0.001532306
word count 0.00147935
test corpus 0.001478776
log bleu 0.001476467
minimum error 0.001458964
different optimization 0.001457401
dependency parsing 0.001448278
linear models 0.001446841
error objective 0.00142748
sentence pairs 0.001419289
convex function 0.00141294
word penalty 0.001411798
test sentences 0.001393916
average word 0.001393561
development sentence 0.00138913
same error 0.001381499
english english 0.00136769
ing bleu 0.001353593
target phrase 0.001347613
labeled dependency 0.001340391
target words 0.001336111
other work 0.001335231
viterbi function 0.001332143
true bleu 0.001313864
source phrase 0.001305955
bleu scores 0.001301634
risk approach 0.001298712
other parameters 0.001297224
source words 0.001294453
mass function 0.001293667
large corpus 0.001292475
model 0.00129239
bleu metric 0.001287385
generative models 0.00128278
ith sentence 0.001276157
sentence indepen 0.001272141
