feature set 0.003104219
different feature 0.002912593
feature weights 0.002877735
feature selection 0.002844529
feature values 0.002842865
feature templates 0.002772433
many feature 0.002771578
feature weight 0.002768107
incremental feature 0.002767954
feature sets 0.002700459
base feature 0.002687278
fixed feature 0.002662775
features time 0.002639212
only features 0.002638222
feature definition 0.002633409
feature mapping 0.002630042
feature selection 0.002625598
appropriate feature 0.002623465
local features 0.002601652
many features 0.002577908
additional features 0.002535046
effective features 0.002514415
global features 0.002481358
informative features 0.002471261
redundant features 0.002440615
irrelevant features 0.002438436
redundant features 0.002433533
feature 0.00237477
sequence model 0.002359261
crf model 0.002225618
linear model 0.002192965
features 0.0021811
language model 0.002170469
other model 0.002157088
perceptron training 0.002098808
current model 0.002065715
baseline model 0.002020398
markov model 0.001936638
training set 0.001909329
training data 0.001888973
model sparsity 0.001882641
crf training 0.001870508
model adaptation 0.001850843
perceptron algorithm 0.001844386
global model 0.001835248
word sequence 0.001807945
trigram model 0.001807845
background model 0.001796235
traditional model 0.001787756
language model 0.001787261
training example 0.001687374
training time 0.001637992
sequence models 0.00160719
incremental training 0.001573064
training examples 0.001554504
negative training 0.001540665
model 0.00153499
current word 0.001514399
training procedure 0.001510048
training sample 0.001507923
training iterations 0.001496492
word segmentation 0.001493679
positive training 0.001481831
test set 0.001466098
fast training 0.001445472
linear models 0.001440894
training samples 0.001432165
language models 0.001418398
same parameter 0.001395116
previous word 0.001393982
chinese word 0.001364735
next word 0.001364062
word string 0.001359898
optimization algorithm 0.001332221
standard algorithm 0.001332083
conditional sequence 0.001325707
linear function 0.001317965
parameter estimation 0.001309835
word segmentation 0.001252248
word suffixes 0.001232474
test results 0.001205609
local models 0.001203471
blasso algorithm 0.001193422
same time 0.001182601
set accuracy 0.001180144
training 0.00117988
tag sequence 0.001178112
averaged perceptron 0.001177418
estimation method 0.001176524
fslr algorithm 0.001173986
pos tagging 0.001172914
averaged perceptron 0.001169182
averaged perceptron 0.001167681
parameter vector 0.001142537
probabilistic sequence 0.001139262
parameter values 0.001138722
sparse models 0.001131458
sequence classifier 0.001130526
trained models 0.001129565
estimation algorithms 0.001120594
