model pos 0.0041254
character word 0.003456678
word sequence 0.003188731
word segmentation 0.003158799
chinese word 0.003101578
possible word 0.003067511
current word 0.003001074
perceptron model 0.002998904
linear model 0.002991086
word language 0.00297338
word count 0.002913557
model language 0.00287684
language model 0.00287684
pos tag 0.0028494
order word 0.002780087
word seg 0.002776406
discriminative model 0.002758616
entropy model 0.002741156
markov model 0.002731133
gram word 0.002729347
word unigrams 0.002726022
labelling model 0.002703236
generating model 0.00265295
cascaded model 0.002639872
perceptron model 0.002636776
language model 0.002630957
model function 0.002629476
occurrence model 0.002629476
model map 0.002629476
linear model 0.002629476
pos sequence 0.002477651
pos tags 0.002447178
pos tagging 0.002405783
model 0.00236997
pos information 0.002353576
other features 0.002336991
pos language 0.0022623
corresponding pos 0.002184366
pos part 0.002167872
such features 0.002140554
pos language 0.0020187
additional features 0.001908083
tag sequence 0.001816191
separate features 0.001814505
useful features 0.001813972
features com 0.001809535
feature space 0.001736688
feature vector 0.00172283
character sequence 0.001712389
training corpus 0.001696311
training set 0.00163184
chinese character 0.001625236
joint segmentation 0.00157565
feature templates 0.001573211
training method 0.001571806
perceptron training 0.00157067
training algorithm 0.001566721
important feature 0.001554618
features 0.00154716
current character 0.001524732
chinese joint 0.001518429
input character 0.001486404
boundary tag 0.001439122
sequence segmentation 0.00141451
feature vector φ 0.001414391
test set 0.001361597
training example 0.001357849
training corpus 0.001346381
test results 0.001303986
training step 0.001292141
second training 0.001287457
training procedure 0.001270573
training examples 0.00126692
third character 0.001254639
perceptron algorithm 0.001253919
different knowledge 0.001239198
whole training 0.001228181
training iterations 0.001224873
ing algorithm 0.001216791
tagging problem 0.001216345
tive training 0.001205963
perceptron learning 0.001183716
parameter vector 0.001181467
segmentation result 0.001160199
decoding algorithm 0.001157517
same approach 0.001153794
feature 0.00115346
vector space 0.001152598
candidate results 0.001149194
similar approach 0.001127313
tagging result 0.001118263
language models 0.001114923
programming approach 0.001096214
university corpus 0.001095167
research corpus 0.001093156
different templates 0.001092718
different knowledge 0.001088099
pku corpus 0.001085772
different kinds 0.001080412
raw corpus 0.001079991
