term                       score
model pruning              0.00360347
probability pruning        0.002788065
language model             0.002702031
model probability          0.002609395
model training             0.002576612
pruning bigram             0.002567298
bigram pruning             0.002567298
pruning threshold          0.002547619
pruning method             0.002545096
several pruning            0.002425081
pruning algorithm          0.002410989
bigram model               0.002388628
different model            0.002378651
new pruning                0.002346558
word error                 0.002338683
baseline pruning           0.002337962
pruning criteria           0.00232904
pruning criterion          0.002304423
cutoff pruning             0.00227178
model perplexity           0.002232975
possible word              0.002223774
language models            0.002219951
model size                 0.002214081
pruning techniques         0.002212929
correct word               0.002201257
thresholding pruning       0.002178471
combined pruning           0.002167728
target word                0.002148862
word pairs                 0.002142269
next word                  0.002131495
relative model             0.00209422
original model             0.00207803
specific model             0.002072758
word string                0.002068653
word pair                  0.002045152
model sizes                0.002019837
word strings               0.002017343
likely word                0.001998957
regression model           0.001986457
bigram models              0.001906548
pruning                    0.00189107
training data              0.001765158
other language             0.001762301
model                      0.0017124
same training              0.001507243
test data                  0.001457178
probability distribution   0.001408184
different threshold        0.0013228
log probability            0.001306593
conditional probability    0.001282457
probability estimate       0.00128052
alternative words          0.001257438
bigram probabilities       0.001255151
prob probability           0.001242797
entropy figure             0.001230664
models                     0.00123032
large text                 0.001198509
sample data                0.001181544
data sparseness            0.00117934
bigram size                0.001177909
performance loss           0.001176825
probability lfprobability  0.001155571
loss function              0.001122647
backoff weights            0.001114218
other measures             0.001111581
different criteria         0.001104221
large number               0.001102988
large search               0.001089338
test text                  0.001087565
test set                   0.001086621
optimal threshold          0.001074934
threshold pairs            0.001069388
backoff scheme             0.001057362
threshold settings         0.00104978
memory constraints         0.001045228
new entropy                0.001038184
cutoff method              0.001034736
balanced corpus            0.001034473
explicit bigram            0.00103243
large vocabulary           0.001029485
cutoff figure              0.001028678
error rate                 0.001017698
novel method               0.00101761
bigram estimates           0.001010008
count cutoff               0.000998725
criterion performance      0.00099629
test results               0.000992854
function learning          0.000991388
unseen bigram              0.000990886
language                   0.000989631
criteria table             0.000983835
practical use              0.000973662
threshold pair             0.000972271
small set                  0.000970755
relative entropy           0.000964516
conditional probabilities  0.000964385
same cer                   0.000962364
mean entropy               0.000955985
corresponding loss         0.000955415
bigram numbers             0.000954079
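Taken at face value, these look like relative-frequency weights for unigram and bigram phrases extracted from a text on language-model pruning; the table itself does not state the weighting scheme. A minimal sketch of the simplest such scheme, assuming plain bigram relative frequencies over a whitespace-tokenized corpus (the function name bigram_scores and the toy sentence are illustrative, not from the source):

    from collections import Counter

    def bigram_scores(tokens):
        # Count each adjacent word pair, then normalize by the total
        # number of bigram tokens so the scores sum to 1.
        counts = Counter(zip(tokens, tokens[1:]))
        total = sum(counts.values())
        return {" ".join(pair): n / total for pair, n in counts.items()}

    # Toy usage (illustrative sentence, not the source corpus):
    tokens = "the pruning threshold controls the pruning criterion".split()
    for term, score in sorted(bigram_scores(tokens).items(), key=lambda kv: -kv[1]):
        print(f"{term}\t{score:.6f}")

On a real corpus the same descending sort reproduces a ranking of the shape shown above; single-word entries such as "pruning" and "model" would come from an analogous unigram count run alongside the bigram one.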
