compression model 0.003209297
language model 0.003173656
supervised model 0.002673436
source model 0.002635656
channel model 0.002614848
probabilistic model 0.00259773
model equation 0.002581795
model the 0.002570729
appropriate model 0.002551922
model our 0.002551796
guage model 0.00253382
utility model 0.002529964
model uses 0.002525198
model 0.00226862
sentence probability 0.00202224
sentence compression 0.001952707
compression probability 0.001950887
language models 0.001717153
original tree 0.001603608
tree structure 0.001588955
such rules 0.001533383
grammar rules 0.001520496
other models 0.001518482
packed tree 0.001493682
tree extractor 0.001491254
compression example 0.001481741
statistical sentence 0.001439574
many features 0.001436668
training corpus 0.001429157
expansion probability 0.001427733
short sentence 0.001427311
unsupervised compression 0.001424448
compression probabilities 0.001408464
small probability 0.00140237
training sentences 0.001400711
training data 0.001389735
good compression 0.001383854
same length 0.001382727
original sentence 0.001381958
long sentence 0.001372951
compression task 0.001370153
cases sentence 0.001368024
sentence pairs 0.0013575
compression rate 0.00133681
english sentences 0.001334542
other constraints 0.001309627
bigram probability 0.001295815
additional compression 0.001291987
general compression 0.001291636
sentence compres 0.001291239
compression problems 0.001275028
nel probability 0.001270837
probability cost 0.001267542
useful language 0.001216158
tence compression 0.001204962
compression rates 0.001204206
show compression 0.001197468
different con 0.001185758
nns features 0.001181268
rate grammar 0.001171736
machine translation 0.001165027
word deletion 0.0011623
charniak language 0.001161644
channel models 0.001158345
unsupervised rule 0.001157565
short sentences 0.001154344
many nodes 0.001151848
ing data 0.001145447
word bigram 0.001144444
free grammar 0.001123391
many compressions 0.001123295
joint rules 0.001122448
important words 0.001121835
original rules 0.001114821
similar rules 0.00111201
original sentences 0.001108991
parallel data 0.001108789
long sentences 0.001099984
grammar importance 0.001089688
special rules 0.001081826
bayes rule 0.001076786
same label 0.001076569
average grammar 0.001073594
development corpus 0.001062767
debugging features 0.001058097
joint rule 0.001051349
same pus 0.001050636
parse trees 0.001045317
parallel training 0.00104235
long rule 0.001034715
unsupervised version 0.001031765
extra rules 0.001025742
additional rule 0.001025104
sentence 0.00101203
probability 0.00101021
training pairs 0.001007118
nal sentences 0.001004025
main problem 0.000998816
example case 0.000997969
data buffering 0.000997668
