translation model 0.00458437
translation phrase 0.00431547
target translation 0.003544137
word alignment 0.00353083
translation results 0.003505104
translation pair 0.003457383
translation information 0.003348824
words model 0.00330005
target word 0.003288927
translation score 0.003262939
same translation 0.003228042
other translation 0.003214513
word pair 0.003202173
translation system 0.003198407
translation task 0.003197332
model training 0.00319253
machine translation 0.00318919
source word 0.00317741
translation pairs 0.00315759
translation candidate 0.003148297
bilingual word 0.003134327
translation performance 0.003112837
translation table 0.003108644
translation process 0.00309509
lexical translation 0.003056565
input word 0.003052941
good translation 0.003025397
network word 0.00301236
translation probabilities 0.003012253
oracle translation 0.003010898
translation confidence 0.003006492
alignment model 0.00299747
frequent translation 0.002992686
final translation 0.00298268
translation candidates 0.002973727
translation result 0.002955614
english translation 0.002954351
translation correspondence 0.002952197
monolingual word 0.002951943
matched translation 0.002938139
translation relationship 0.002938139
word embedding 0.002904765
ing word 0.002891531
language model 0.002874782
next word 0.00278965
word count 0.002787885
word level 0.002737642
initial word 0.002732261
word embeddings 0.00271652
gual word 0.002704775
word penalty 0.002700748
translation 0.00268647
target phrase 0.002486667
model score 0.002474369
phrase pair 0.002399913
model parameters 0.002399194
source phrase 0.00237515
reordering model 0.002290927
target words 0.002259817
distortion model 0.002241145
conventional model 0.002194719
feature training 0.002185389
training data 0.002181732
our model 0.002176414
markov model 0.002167436
language model 0.002165642
trained model 0.002155273
model parameters 0.002149468
phrase embedding 0.002102505
phrase pairs 0.00210012
chinese words 0.001975747
based phrase 0.00195657
training method 0.001938799
phrase level 0.001935382
smt training 0.001917286
parent phrase 0.001906072
training set 0.001903846
model 0.0018979
merging phrase 0.00188887
explore phrase 0.001880529
construct phrase 0.001880529
chinese training 0.001868227
local training 0.001858182
network features 0.00184352
training approach 0.001836833
training corpus 0.001804314
training methods 0.001786165
english training 0.001759096
training size 0.001714691
boundary words 0.001688467
input layer 0.001683271
english words 0.001670031
surrounding words 0.001658074
global training 0.001652291
phrase 0.001629
efficient training 0.001621258
global features 0.001620081
supervised training 0.001616069
