n-gram                        score
-----------------------------------------
compression model             0.00352633
language model                0.00297265
sentence compression          0.00281678
model training                0.002717976
discriminative model          0.002391431
compression models            0.002386885
model scores                  0.00231825
joint model                   0.002297121
transduction model            0.00220158
ngram model                   0.002199892
-tion model                   0.00218765
-guage model                  0.002174102
-criminative model            0.002163437
-inative model                0.002160341
compression results           0.002053158
target compression            0.002044538
discriminative compression    0.002041481
similar compression           0.002024639
compression task              0.001985894
model                         0.00193814
candidate compression         0.001880257
final compression             0.001860134
ngram compression             0.001849942
smooth compression            0.001820073
compression ratios            0.001809785
close compression             0.001806952
corpus training               0.001660588
training corpus               0.001660588
syntactic features            0.001640975
discriminative features       0.001624001
rule features                 0.001590809
compression                   0.00158819
original sentence             0.001586937
local features                0.001537183
tree models                   0.001523509
ngram features                0.001432462
rich features                 0.001416432
trigram language              0.001375553
language generation           0.001328232
language processing           0.001318365
natural language              0.001314491
ngram language                0.001296262
discriminative models         0.001251986
parallel corpus               0.001249041
probability score             0.001248893
such models                   0.001245485
sentence                      0.00122859
target tree                   0.001181162
important information         0.001174319
features                      0.00117071
source tree                   0.001164717
joint models                  0.001157676
other systems                 0.001145692
machine translation           0.001134389
reuters corpus                0.001100193
tree nodes                    0.001087856
structured models             0.001086361
training process              0.001068992
simple tree                   0.001067785
transduction models           0.001062135
training the                  0.001061768
the training                  0.001061768
ngram models                  0.001060447
pure tree                     0.001056503
training development          0.001051484
scale data                    0.001050508
set performance               0.001048038
-ing results                  0.001037152
language                      0.00103451
-inative models               0.001020896
dependent models              0.00101919
information extraction        0.001018986
-erative models               0.001017491
beam search                   0.00100236
syntax tree                   0.001000196
synchronous tree              0.000998299
data preparation              0.000993279
similar time                  0.000989856
tree transduction             0.000988254
structural information        0.000986687
decision tree                 0.00097782
-ical score                   0.000972986
above score                   0.000971009
score definition              0.000967355
decoding method               0.000966541
tree substitution             0.000957942
search decoding               0.000954731
new decoding                  0.000948688
tree transduc-                0.000945434
-fect tree                    0.000943643
other hand                    0.000934855
synchronous grammar           0.000924963
original beam                 0.000913174
new method                    0.000900833
possible compressions         0.000899535
log probability               0.000897023
words                         0.00089009
human rating                  0.00088583
corpus                        0.000880752
human rat-                    0.000874407

Entries with a leading or trailing hyphen are word fragments produced by line-break hyphenation in the source text (e.g. "-guage model" from "lan-guage model").
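
The trailing digit runs in the raw scores (e.g. 0.0035263300000000003) and the mixed E-notation are artifacts of printing floating-point values directly. As a minimal sketch, assuming the list is stored as whitespace-separated term/score lines in a hypothetical file ngram_scores.txt, the pairs can be loaded, the notation normalized, and the ranking re-checked like this:

    from pathlib import Path

    def load_ngram_scores(path):
        """Parse whitespace-separated '<term words> <score>' lines."""
        pairs = []
        for line in Path(path).read_text().splitlines():
            parts = line.split()
            if len(parts) < 2:
                continue  # skip blank or malformed lines
            *words, raw = parts
            # float() accepts plain decimals and E-notation (e.g. 9.98299E-4);
            # rounding to 9 places trims float-representation noise.
            pairs.append((" ".join(words), round(float(raw), 9)))
        return sorted(pairs, key=lambda p: p[1], reverse=True)

    if __name__ == "__main__":
        for term, score in load_ngram_scores("ngram_scores.txt")[:5]:
            print(f"{term:28s}{score:.9g}")

Rounding is monotonic, so it cannot reorder entries; at worst it merges scores that differ only past the ninth decimal place, which none of the values above do.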
