translation model 0.00308176
alignment model 0.00301322
language model 0.002694557
word alignment 0.00233883
weighting model 0.002234676
model scores 0.002223742
logistic model 0.002193852
distortion model 0.002174189
mixture model 0.002121319
model perplexities 0.002087038
language model 0.002086589
equivalent model 0.002085688
translation models 0.001953102
weighting features 0.001900766
target word 0.001851616
model 0.00184357
language training 0.001826717
similarity features 0.001808344
simple features 0.001796001
split features 0.001763693
boolean features 0.001753189
splitting features 0.001752034
complex features 0.001752034
tity features 0.001752034
important features 0.001752034
feature weights 0.001735983
source phrase 0.001717217
machine translation 0.001648221
target phrase 0.00161689
training set 0.001606389
training data 0.001571286
language models 0.001565899
translation experiments 0.00154903
svm feature 0.001523311
training corpus 0.001514182
english translation 0.001510464
features 0.00150966
phrase pairs 0.001498354
phrase weights 0.001493337
word frequencies 0.001485142
phrase table 0.001476577
phrase pair 0.001475805
alignment procedure 0.001473817
feature subsets 0.001459005
target words 0.001452606
word values 0.001441496
word count 0.001411314
lexical phrase 0.001355265
source sentences 0.001337834
weighting phrase 0.00132556
out training 0.001319371
training corpora 0.001306054
parallel training 0.001293214
phrase probabilities 0.001292164
empirical phrase 0.001285085
training procedure 0.001279897
out phrase 0.001278095
training material 0.001263478
emea training 0.001249079
multinomial phrase 0.00123836
translation 0.00123819
training criterion 0.001226621
phrase extraction 0.001225842
identical training 0.001225595
self training 0.001223042
ditional phrase 0.001222589
heterogeneous training 0.001217905
training procedure 0.001217905
standard method 0.001210159
individual phrase 0.0011832
chinese models 0.00117958
feature 0.0011771
phrase tables 0.001176951
sentence pairs 0.001176719
alignment 0.00116965
general language 0.001157843
corpus sentence 0.001151271
small set 0.001131522
other work 0.001128791
weighting models 0.001106018
standard smt 0.001096178
various models 0.001078003
minimum source 0.0010765
evaluation set 0.001061066
out models 0.001058553
mert algorithm 0.001055903
individual source 0.001031509
source lms 0.001031157
logistic function 0.001029581
target domain 0.001026837
results table 0.001025395
multinomial models 0.001018818
selection algorithm 0.001011052
different features 0.00100331
mixture models 0.000992661
function exp 0.000992449
ibm models 0.000991385
previous work 0.000976114
training 0.00097573
different categories 0.000975618
