n-gram                   weight
translation model        0.0052853
translation decoding     0.00441588
translation models       0.00431177
translation rules        0.00415741
multiple translation     0.003842947
same translation         0.003792274
single translation       0.003758217
node translation         0.003686286
translation hypergraph   0.003606184
translation level        0.003590864
machine translation      0.003570692
translation hyperedge    0.003543764
partial translation      0.003520429
individual translation   0.00348358
translation hypergraphs  0.003457435
packed translation       0.003449875
translation sys          0.003439357
arbitrary translation    0.003437495
consensus translation    0.0034323
candidate translation    0.003422037
translation hyper        0.003416541
ple translation          0.003407935
tiple translation        0.003406858
didate translation       0.003405547
language model           0.00322739
translation              0.0031302
model bleu               0.002856462
same model               0.002817174
method model             0.002661515
second model             0.002612173
first model              0.002602862
variable model           0.002508363
linear model             0.002501868
arbitrary model          0.002462395
able model               0.002446273
guage model              0.002443466
model searches           0.00243212
source language          0.00233855
language models          0.00225386
model                    0.0021551
different models         0.002022825
source tree              0.002012144
other models             0.001913861
source sentence          0.001906847
multiple models          0.001894317
decoding node            0.001841766
source phrase            0.001823018
joint decoding           0.001814964
single models            0.001809587
ing decoding             0.00179799
training data            0.001768798
hypergraph decoding      0.001761664
source string            0.001750051
source side              0.001733717
decoding time            0.001726917
example source           0.00171474
source parse             0.001713792
language sentence        0.001712877
decoding table           0.001675069
string models            0.001665361
individual decoding      0.00163906
training corpus          0.001616806
source sen               0.001612229
various models           0.001603023
average decoding         0.001585405
scfg rule                0.001585022
hyperedge rule           0.001575334
ual decoding             0.001565589
rule coverage            0.001564256
source strings           0.001556355
source substrings        0.001549836
source substring         0.001542662
individual models        0.00153495
target tree              0.001526837
good rule                0.001521674
joint training           0.001478148
vidual models            0.001464165
ple models               0.001459305
tiple models             0.001458228
rule path                0.00145797
trary models             0.00145707
scfg rules               0.001450462
feature function         0.001446396
same target              0.001443027
refinement rule          0.001438485
rule cover               0.001438485
excellent rule           0.001438485
target sentence          0.00142154
other words              0.001413662
word penalty             0.001410948
language mod             0.001381691
word alignments          0.001377601
glue rules               0.001370852
sri language             0.001347966
ing nodes                0.001341119
work system              0.001338511
training table           0.001338253
target phrase            0.001337711
system combination       0.001336989
ing data                 0.001332244
