translation model 0.00336395
translation output 0.002733313
machine translation 0.002699628
translation baseline 0.002689281
translation task 0.002684185
english translation 0.002675064
translation performance 0.00267124
training data 0.002658661
translation probability 0.002566155
test data 0.002561141
patent translation 0.002558045
word sequence 0.002526854
current word 0.002514998
translation laboratory 0.00251396
translation subtask 0.002487846
word penalty 0.002456758
tail word 0.002395903
corpus phrase 0.00239006
language phrase 0.00235055
target phrase 0.002309896
other words 0.002309333
source phrase 0.00229126
phrase table 0.002214105
translation 0.00220674
bilingual phrase 0.002113729
development data 0.002081698
patent data 0.002040705
unknown words 0.002027805
frequent words 0.002015984
data sets 0.001990259
ation data 0.001986365
language model 0.00198207
development data 0.001975147
training corpus 0.001833631
different smt 0.00181275
phrase wik 0.001807071
phrase includingwba 0.001807071
words 0.00171291
model scores 0.001670938
target language 0.001609066
source language 0.00159043
bilingual training 0.0015573
phrase 0.00152569
smt features 0.001505893
training sentences 0.001488695
smt system 0.001487667
small training 0.001485116
parallel training 0.001478885
same size 0.001471854
probability estimation 0.001466786
bleu score 0.001451785
language model 0.001448352
english training 0.001437585
smt performance 0.001422664
large corpus 0.001408916
monolingual corpus 0.001405223
target lms 0.001395419
standard smt 0.001394233
bleu scores 0.001388218
smt experiments 0.001385865
smt research 0.001381114
parallel corpus 0.001373994
space language 0.001360747
current smt 0.001359342
smt decoding 0.001352354
training corpus 0.001337216
same method 0.001335628
monolingual target 0.001325059
smt decoder 0.00131436
same time 0.001279134
smt systems 0.001277492
training corpus 0.001276309
training cost 0.00126817
smt system 0.001256848
extra corpus 0.00125421
parallel training 0.001252549
smt features 0.001239567
first layer 0.001231005
probability calculation 0.0012303
language pair 0.001229893
ted corpus 0.001224394
vocabulary size 0.001224328
other methods 0.001220691
probability mass 0.001218445
accurate probability 0.00121356
significance test 0.001210132
average bleu 0.001200457
corpus the 0.001199073
tokenized test 0.001182518
sampling test 0.00117861
smoothing method 0.001174556
different domains 0.001173536
model 0.00115721
output layer 0.001153326
different lengths 0.001151394
similar size 0.001147491
phrases figure 0.0011411
different criteria 0.00113853
same weights 0.001133916
language part 0.00112937
