translation rule 0.00473344
translation forest 0.00460533
translation rules 0.00448204
translation results 0.003658105
target translation 0.003558395
translation models 0.003465587
hyperedge translation 0.003431053
translation hyperedge 0.003431053
example translation 0.00342432
translation forests 0.003405653
machine translation 0.003399104
translation probabilities 0.003356861
translation hyperedges 0.003327841
chinese translation 0.00331699
corresponding translation 0.003287609
current translation 0.003282717
flat translation 0.003261004
translation approach 0.003257109
translation steps 0.003256259
complete translation 0.003226402
translation mistakes 0.003224188
source tree 0.003068037
parse tree 0.002926897
many tree 0.002909518
translation 0.00290683
forest pruning 0.00277149
target tree 0.002719945
forest decoding 0.002580216
parse forest 0.002557017
rule set 0.002536275
single tree 0.002526949
tree structures 0.002519767
tree fragments 0.002469705
tree substitution 0.002391252
same rule 0.002378929
new forest 0.00227273
rule extraction 0.002230266
translation rule 0.002227278
root node 0.002216395
forest decoder 0.002207369
node top 0.002189346
single forest 0.002157069
frontier nodes 0.002135863
consequent node 0.002135737
tail node 0.002132695
rule extraction 0.002132298
other nodes 0.002123191
packed forest 0.002083982
pruning algorithm 0.00207643
term forest 0.002058954
lexical rules 0.002024744
forest informally 0.002000634
forest curve 0.002000634
variable nodes 0.001925156
transducer rules 0.00190126
internal nodes 0.0018616
antecedent nodes 0.001837494
language model 0.001835163
interior nodes 0.001834255
descendent nodes 0.001834255
source sentence 0.001817708
training sentence 0.001733691
source language 0.0017187
forest 0.0016985
decoding time 0.001588757
parse trees 0.001579787
rules 0.00157521
cube pruning 0.001575106
large data 0.001566851
nodes 0.00153068
source side 0.001529508
training corpus 0.00151884
pruning threshold 0.001490895
treebank data 0.001484068
same training 0.001467959
gram model 0.001466388
simple model 0.001464071
pruning algorithm 0.001430896
sentence pairs 0.001427756
extraction algorithm 0.001407096
data preparation 0.001400437
word yǔ 0.001386195
decoding step 0.00138292
parse hyperedge 0.00138274
pruning technique 0.001378901
multiple parse 0.0013771669999999999
test set 0.001375381
target language 0.001370608
parse forests 0.00135734
source languages 0.001351255
example sentence 0.001335541
lazy algorithm 0.001326759
conversion algorithm 0.001319423
rate training 0.001317585
single parse 0.001317086
parse hyperedges 0.001279528
decoding schemes 0.001269811
chinese parse 0.001268677
decoding algorithms 0.001262127
average decoding 0.001251055
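The list above appears to be uni- and bigram terms ranked by a normalized frequency-style score; the actual scoring method that produced it is not stated. As a hypothetical illustration only, the sketch below (all names and the relative-frequency scoring are assumptions, not the original method) shows how such a ranked "term score" list could be produced from tokenized text.

```python
from collections import Counter


def ranked_terms(tokens, max_n=2):
    """Rank uni- and bigrams by relative frequency (a stand-in score;
    the original list's scoring method is unknown)."""
    counts = Counter()
    total = 0
    for n in range(1, max_n + 1):
        grams = [" ".join(tokens[i:i + n])
                 for i in range(len(tokens) - n + 1)]
        counts.update(grams)
        total += len(grams)
    # Highest score first, matching the list's descending order.
    return sorted(((term, c / total) for term, c in counts.items()),
                  key=lambda pair: -pair[1])


if __name__ == "__main__":
    toks = "forest based translation uses a packed forest of parse trees".split()
    for term, score in ranked_terms(toks)[:5]:
        print(f"{term} {score:.6f}")
```

Scores emitted this way are clean fixed-point decimals (via `f"{score:.6f}"`), which avoids the binary floating-point noise visible in some of the raw values above.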
