model training 0.00377346
training data 0.00318532
feature function 0.002943111
feature weight 0.002646616
feature selection 0.002631781
language model 0.002609024
bigram feature 0.002607079
training set 0.002583931
base feature 0.002581184
model adaptation 0.002578498
sparse feature 0.002577769
feature space 0.002560798
background model 0.002522033
linear model 0.002511399
kth feature 0.002496791
model estimation 0.002453621
model size 0.002388865
adaptation training 0.002381638
current model 0.002356288
model parameters 0.002343862
other features 0.002340701
training method 0.002333356
various model 0.002322149
training error 0.002283314
candidate features 0.002279611
ground model 0.002277931
trigram model 0.002271764
complex model 0.002266686
test data 0.00225149
feature 0.00224838
gram model 0.002245108
training methods 0.002238072
model com 0.002237746
model complexities 0.002233073
model coefficients 0.002233073
fitted model 0.002233073
training sentences 0.002228937
bigram features 0.002214459
training sets 0.002181534
similar training 0.002170406
training sample 0.002162384
major features 0.002157846
discriminative training 0.002145735
effective features 0.002141072
headword features 0.002137504
tive training 0.002130418
training errors 0.002127629
gram features 0.002115708
native training 0.002077421
training procedure 0.002068476
training samples 0.002053619
training meth 0.002046689
criminative training 0.002043223
training sam 0.002036379
fitting training 0.002035892
adaptation data 0.001990358
model 0.00198516
other data 0.001881961
features 0.00185576
data sets 0.001790254
training 0.0017883
candidate word 0.001733201
word string 0.001687839
word bigram 0.001668049
unseen data 0.001655288
traditional word 0.001646784
encarta data 0.00164463
word strings 0.001599117
likely word 0.001598226
word trigram 0.001595954
word bigrams 0.001577415
word unigram 0.001563311
word tri 0.001559193
appropriate word 0.001557838
language models 0.001459129
adaptation test 0.001447808
linear models 0.001361504
parameter set 0.001316405
original algorithm 0.001293516
evaluation test 0.001260539
test sets 0.001247704
language text 0.001226338
different domain 0.001208501
different approaches 0.001197433
other words 0.001193985
test errors 0.001193799
error function 0.001189745
different range 0.001175869
perceptron algorithms 0.001175658
different train 0.001174695
blasso algorithm 0.001169773
regression models 0.001168609
linear language 0.001150103
fast algorithm 0.001149434
text input 0.001146138
gradient search 0.001144362
learning rate 0.001141315
similar results 0.001123732
complex models 0.001116791
set gen 0.001115234
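Each line above pairs a phrase with a floating-point score, highest first. A minimal parsing sketch (the function name and the sample lines here are illustrative, not from the original document); the score is taken as the last whitespace-separated token, and rounding suppresses float-representation noise such as `0.0026317809999999997`:

```python
# Sketch: parse "phrase score" lines from a ranked keyphrase list.
# The sample below reuses three entries from the list above.

SAMPLE = """\
model training 0.00377346
training data 0.00318532
feature selection 0.0026317809999999997
"""

def parse_ranked_phrases(text):
    """Return a list of (phrase, score) tuples, one per non-empty line.

    The last token on each line is the score; everything before it is
    the (possibly multi-word) phrase.
    """
    entries = []
    for line in text.splitlines():
        if not line.strip():
            continue
        *words, score = line.split()
        # Round to 9 decimal places to strip float-repr noise.
        entries.append((" ".join(words), round(float(score), 9)))
    return entries

entries = parse_ranked_phrases(SAMPLE)
```

Rounding to 9 decimal places is an assumption chosen to match the apparent precision of the cleanest scores in the list; it does not change any value beyond the representation noise.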
