syntactic word 0.003110116
word sequence 0.003041871
possible word 0.002999218
relative word 0.00299805
word pairs 0.002976501
local word 0.002964925
significant word 0.002959858
word error 0.002936243
word history 0.002925341
word dependencies 0.002923457
word peace 0.00290202
word mass 0.002900911
enough word 0.00289614
word usage 0.002894441
word sequences 0.002893693
word predictions 0.002886258
tant word 0.002886258
word politics 0.002886258
language model 0.002753279
model probability 0.002570206
model ing 0.002420002
model estimation 0.00227559
trigram model 0.002275267
bigram model 0.002259725
model interpolation 0.002241608
model probabilities 0.002199111
level model 0.002084927
baseline model 0.002074813
further model 0.002070184
overall model 0.002070163
model computation 0.002058587
joint model 0.002058161
gram model 0.002052026
detection model 0.002042045
core model 0.002041097
mixture model 0.00204041
model proba 0.002030621
model construction 0.002028669
guage model 0.002025123
model probabili 0.002023419
cache model 0.002021718
model mea 0.002020541
model combination 0.002016611
model identi 0.002015412
model combi 0.002015412
gnage model 0.002015412
model specifications 0.002015412
other words 0.001793115
model 0.00179278
function words 0.001698381
previous words 0.001586856
language models 0.001541241
content words 0.001512039
language perplexity 0.00149557
certain words 0.001476357
training data 0.001468204
trigram language 0.001442986
eral words 0.001433479
bigram language 0.001427444
language modeling 0.001417215
smooth language 0.001290134
conditional language 0.001263248
statistical language 0.001263225
probability estimation 0.001260236
trigram probability 0.001259913
common language 0.001255663
bigram probability 0.001244371
language mode 0.00124185
probability estimate 0.001239485
topic distribution 0.001236058
sufficient data 0.001229433
topic models 0.001227863
gram language 0.001219745
turing language 0.001218863
words 0.00120868
language mod 0.001205311
specific language 0.001204564
accurate language 0.001203979
enough data 0.001202887
ngram language 0.001202226
sensitive language 0.001201243
contribution language 0.001199227
final language 0.00119499
data fragmentation 0.001193302
igram language 0.001191284
cache language 0.001189437
speech corpus 0.001184272
specialized language 0.001183941
attached language 0.001182937
sequence probability 0.001155687
lexical probability 0.001153685
ing function 0.001116923
news corpus 0.001083944
trigger models 0.00108207
news topic 0.001080731
conditional probability 0.001080175
probability distributions 0.001068601
proper probability 0.001055266
tree structure 0.00105162
ent probability 0.001038151
