phrase translation 0.00680299
translation model 0.0051958
phrases translation 0.00447684
translation pair 0.004356055
translation system 0.004323644
bilingual phrase 0.004273733
target phrase 0.004220703
phrase pair 0.004216925
translation probability 0.00415859
translation score 0.004147979
translation models 0.004099747
phrase translations 0.00408155
machine translation 0.004023532
translation pairs 0.004008919
phrase table 0.004007507
source phrase 0.003994044
translation probabilities 0.003975236
phrase features 0.00396868
baseline phrase 0.003907955
translation candidate 0.003892676
standard phrase 0.003871303
phrase pairs 0.003869789
translation tasks 0.003859164
continuous phrase 0.003831366
translation quality 0.003830822
phrase representation 0.003805977
traditional translation 0.003783598
translation units 0.003782207
translation candidates 0.00378
wmt translation 0.003771021
reference translation 0.003763309
translation infor 0.003758375
complementary translation 0.003755417
gram translation 0.003755208
uous translation 0.003754853
translation relation 0.003751947
phrase similarity 0.003732498
phrase trans 0.003732322
phrase vec 0.00371495
phrase transla 0.003700333
traditional phrase 0.003644468
novel phrase 0.003635675
phrase mappings 0.003634462
modeling phrase 0.003631516
phrase fea 0.003630975
raw phrase 0.003614948
phrase representa 0.003614506
translation 0.00347106
word translations 0.0034513
word vector 0.003419314
phrase 0.00333193
training model 0.00325194
input word 0.003101175
word vec 0.0030847
training data 0.0030842
word sequence 0.003013147
training algorithm 0.00254633
language model 0.002477102
different data 0.002460894
bleu training 0.002364668
training method 0.002332381
data set 0.002207304
smt training 0.002199349
network model 0.002193934
training set 0.002177504
model parameters 0.002161542
linear model 0.002150966
topic model 0.002123629
tion model 0.00212137
parallel data 0.002105316
model projection 0.002089657
parallel training 0.002075516
weighting model 0.002058459
field model 0.002027353
guage model 0.002020738
ear model 0.002010267
model initialization 0.002010128
mrfp model 0.002005545
network training 0.001996394
training objective 0.001974717
ing data 0.00196419
data sets 0.001910212
target phrases 0.001894553
data sparseness 0.00189427
discriminative training 0.001876619
wmt data 0.001856961
newswire data 0.001855269
entire training 0.001847491
batch training 0.00184674
dev data 0.001846168
data sparsity 0.00184581
clickthrough data 0.001838026
severe data 0.001838026
lingual training 0.001826055
target sentence 0.001824153
training samples 0.001821018
sentence pair 0.001820375
cptm training 0.001817862
training cptm 0.001817862
training criterion 0.001815987
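The listing above is a whitespace-separated "phrase score" format where the phrase itself may contain spaces. A minimal parsing sketch (assuming that format, with the score as the last token on each line; the function name is illustrative, not from the source):

```python
def parse_phrase_scores(lines):
    """Parse lines of "<phrase> <score>" into (phrase, score) tuples.

    The phrase may span several space-separated tokens; the final
    token on each non-empty line is taken to be the numeric score.
    """
    entries = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        *words, score = line.split()
        entries.append((" ".join(words), float(score)))
    return entries


sample = [
    "phrase translation 0.00680299",
    "translation model 0.0051958",
    "phrases translation 0.00447684",
]
entries = parse_phrase_scores(sample)
assert entries[0] == ("phrase translation", 0.00680299)
```

Since the listing is already ranked, descending order of the parsed scores can serve as a quick sanity check on the parse.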
