language model 0.00228358
language models 0.001926111
model training 0.001878484
different language 0.001874595
aac language 0.001765387
different data 0.001671561
different models 0.001661046
new model 0.001654399
such language 0.001594933
model parameters 0.001582685
mixture model 0.001575919
aac data 0.001562353
training data 0.00153153
model order 0.001527986
training models 0.001521015
training words 0.001516521
user data 0.001473805
different training 0.001469499
switchboard model 0.001465504
gram model 0.00144348
model experiment 0.001441777
mixture language 0.001431999
guage model 0.001430487
model adaptation 0.001428062
model improvement 0.001426927
other words 0.001391222
different test 0.001368831
twitter data 0.001361452
language mod 0.001334307
language modeling 0.001329825
aac users 0.001328644
new word 0.001322164
first word 0.00131215
aac user 0.001302566
statistical language 0.001301628
large word 0.001285718
turktrain language 0.001281244
all language 0.001268528
language skills 0.001267529
language impairments 0.001267529
word prediction 0.001262346
aac test 0.001259623
aac text 0.001253118
other aac 0.001234992
turk data 0.001226491
such aac 0.00122066
mixture models 0.00121845
model 0.00121375
good data 0.001199527
word list 0.001182802
current word 0.001180625
training set 0.001168991
data selection 0.00116797
word predictions 0.001165173
word error 0.001162537
twitter training 0.00115939
users type 0.001148931
many aac 0.001145095
different types 0.00113895
popular word 0.001129149
blog data 0.001118859
word lists 0.001118808
baseline models 0.001118532
minimum word 0.001111105
usenet data 0.001107635
unique words 0.001105894
data representative 0.001103905
data collection 0.001102824
unknown word 0.001100993
word vocab 0.001098207
other text 0.001096996
crowdsourced data 0.001090421
train data 0.001090279
data sources 0.001087172
gram models 0.001086011
word vocabu 0.001085157
word completions 0.00107931
newswire data 0.001074995
guage models 0.001073018
different techniques 0.001070118
language 0.00106983
web data 0.001069724
models sizes 0.001069591
aac communication 0.001069412
large training 0.001068937
data beukelman 0.001066387
surrogate data 0.00106494
data cleaning 0.001064062
training sentence 0.001058896
turk aac 0.001055252
ger models 0.001053959
rare words 0.001050268
filler words 0.001050268
aac communications 0.001039522
different thresholds 0.001036189
different sources 0.001025141
different selec 0.001023075
training size 0.001014521
full training 0.001001145
small training 0.000998559
