model translation 0.00465466
word translation 0.00370937
translation rule 0.00363746
translation rules 0.003304421
translation grammar 0.003017873
translation probability 0.0030077
discriminative model 0.002840246
generative model 0.002791593
new translation 0.002716105
machine translation 0.002680071
translation systems 0.002664703
generic model 0.002661059
hidden model 0.002660967
statistical model 0.002654139
underlying model 0.002631111
model let 0.002620779
compact model 0.002613855
posterior model 0.002606331
hidden model 0.002584953
compact model 0.002584953
gual translation 0.002566383
erate translation 0.002564715
exhaustive translation 0.002558654
model 0.00234159
translation 0.00231307
target word 0.002259785
word alignment 0.002140447
sparse features 0.002079836
rule probability 0.00201902
class features 0.002015856
features type 0.002012522
posterior features 0.002008551
dense features 0.002004612
active features 0.00200112
comprehensive features 0.001988334
features now 0.001988334
rule set 0.001889309
large rule 0.001878229
word alignments 0.001833076
baseline rule 0.0017801
features 0.00174381
rule modeling 0.001728252
special word 0.001663654
word align 0.001656571
training data 0.001655208
word strings 0.001646748
generative feature 0.001645953
different rules 0.001633172
target words 0.001632718
translation rule 0.001627015
rich rule 0.001621547
similar source 0.001605787
feature space 0.001603353
source side 0.001582774
feature value 0.001545862
feature frequency 0.001545445
feature values 0.001543892
sparse feature 0.001531976
feature optimization 0.001508562
class feature 0.001467996
dense feature 0.001456752
source tokens 0.001454545
training set 0.001435944
view rules 0.001316239
translation rules 0.00131339
target side 0.001238959
source 0.0012073
feature 0.00119595
ing models 0.001167521
target sides 0.001139809
training data 0.001134414
large set 0.001118758
parallel training 0.001116609
frequent words 0.001111932
target tokens 0.00111073
lexical models 0.001061034
grammar generation 0.001057423
hidden models 0.00104763
system bleu 0.001043241
parallel sentences 0.001032719
evaluation data 0.00102848
rich models 0.00102541
continuous words 0.0010144
baseline system 0.001000411
translation probability 0.000997255
case study 0.000994764
rules 0.000991351
linguistic grammar 0.000983106
adjoining grammar 0.000970336
modeling parallel 0.000964891
meta grammar 0.000960373
posterior probability 0.000959371
future work 0.000958662
phrase translation 0.000957555
bleu score 0.000941757
pioneer work 0.000914558
previous work 0.000914558
possible derivations 0.000909032
various derivations 0.000872295
training 0.000871025
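The list above pairs a multi-word term with a score as the final whitespace-separated field on each line (the scoring scheme itself is not stated here). A minimal sketch for loading such lines, assuming only this "term … score" layout:

```python
# Parse lines of the form "term words ... score", where the term may
# contain spaces and the score is always the last field on the line.
def parse_term_scores(lines):
    pairs = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        *words, score = line.split()
        pairs.append((" ".join(words), float(score)))
    return pairs

sample = [
    "model translation 0.00465466",
    "word translation 0.00370937",
    "translation rule 0.00363746",
]
pairs = parse_term_scores(sample)
print(pairs[0])  # → ('model translation', 0.00465466)
```

`float()` also accepts scientific notation (e.g. `9.41757E-4`), so the parser handles either score format unchanged.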
