model training 0.00292657
conditional model 0.00292161
generative model 0.00247839
statistical model 0.002444162
model framework 0.002412523
model structure 0.002384722
prior model 0.002376731
model regularization 0.002349777
composite model 0.002321062
model initialization 0.002319034
model topology 0.00230878
original model 0.00230314
lexicalization model 0.002299573
hcrf model 0.002298433
model 0.00208782
word string 0.001987668
word sequence 0.001904895
same features 0.001899287
lexical features 0.001852964
feature set 0.001763446
word sequences 0.001752071
preamble word 0.001731267
transition features 0.001712542
additional features 0.001708922
context features 0.001692442
unigram features 0.001675963
covered word 0.001675071
new features 0.001674153
prior features 0.001663751
coverage features 0.001645321
indicator features 0.001621816
bigram features 0.001600362
gram features 0.001596753
homogeneous features 0.001585963
training data 0.001565968
additional feature 0.001503522
second feature 0.001500539
context feature 0.001487042
conditional probability 0.001482181
unigram feature 0.001470563
new feature 0.001468753
third feature 0.001464324
feature counts 0.001442779
feature functions 0.001440825
coverage feature 0.001439921
feature vector 0.001439079
feature sets 0.001429615
boundary feature 0.0014094
conditional models 0.001400808
expected feature 0.001386672
different training 0.001382935
features 0.00137484
conditional distribution 0.0013577
crf training 0.001346258
preamble words 0.001263093
conditional modeling 0.001241915
language engineering 0.001185593
natural language 0.001175524
language processing 0.001174094
feature 0.00116944
discriminative training 0.001164387
hidden conditional 0.001156634
training algorithms 0.001146529
spoken language 0.001141456
set slot 0.001139249
conditional expectation 0.001128338
conditional random 0.001122375
perceptron training 0.001118268
training example 0.001101998
test data 0.001083798
slot error 0.001078491
other optimization 0.001074214
training iterations 0.0010514
training capability 0.001049283
training performances 0.001049283
first set 0.000997088
words 0.000995986
annotated data 0.000987129
accuracy results 0.000984349
slot state 0.00098025
probability objective 0.000976891
labeled data 0.00096894
atis data 0.000967928
time slot 0.000961872
objective function 0.000961552
same state 0.000959454
generative models 0.000957588
slot name 0.000954244
test set 0.000950586
data sparseness 0.000943071
data requirement 0.000937972
language 0.000928411
other hand 0.000927236
previous slot 0.000925652
posterior probability 0.000910739
semantic error 0.000888513
second slot 0.000876342
baseline error 0.000875873
state sequence 0.000875742
single slot 0.000871245
