first model 0.002346334
chain model 0.002101293
feature set 0.002096912
chunking model 0.002073036
novel model 0.002068879
model entities 0.002054037
nal model 0.002054037
pos tag 0.001926357
several features 0.001915757
training set 0.001879662
pos tags 0.001869797
pos accuracy 0.001821644
feature vector 0.001795677
model 0.00179115
different pos 0.001784199
pos tagging 0.001729128
token pos 0.001638834
lexicalized features 0.001623683
joint models 0.001623239
unlexicalized features 0.001620871
pos labels 0.001608349
pos tagger 0.00159911
pos node 0.001596094
decoding algorithm 0.001588096
binary feature 0.0015784
same word 0.00157418
feature representation 0.001568861
last pos 0.001564624
pos taggers 0.001539753
pos collapsed 0.001536425
appropriate feature 0.001534168
first word 0.001518154
other words 0.001496775
learning algorithm 0.001484004
training methods 0.001477858
inference algorithm 0.001437976
standard training 0.001409763
viterbi algorithm 0.001405991
test set 0.001377777
joint accuracy 0.00137346
features 0.00135509
baseline models 0.00135489
cky algorithm 0.001308928
relaxed algorithm 0.001299616
joint labeling 0.00129961
training samples 0.00129427
training sample 0.001293945
perceptron weight 0.001275358
vector function 0.001274583
other table 0.001270736
feature 0.00124673
word tokens 0.001243804
same time 0.00124049
other systems 0.001231773
labels other 0.001230831
joint label 0.001219618
natural language 0.001208936
exact decoding 0.001187056
margin method 0.001172111
other hand 0.001171259
such tasks 0.001170194
language processing 0.001169374
structured set 0.001168606
learning method 0.001157037
input sequence 0.001153991
optimization method 0.00115279
cascaded models 0.001135089
weight vector 0.001113759
chunk tag 0.001104055
mmvp joint 0.00110281
noun tag 0.001091939
results com 0.001090587
natural language 0.001085492
chunking models 0.001080739
dynamic conditional 0.001077102
latter models 0.001074389
common method 0.001067829
models aging 0.001061848
former models 0.001061848
same chunk 0.001061478
empirical results 0.00105387
dynamic programming 0.001050786
chunk tags 0.001047495
such attempt 0.001045329
exact inference 0.001036936
algorithm 0.0010352
input sentence 0.001031405
training 0.00102948
risk function 0.001022722
task data 0.001015738
dependency parsing 0.000995735
current chunk 0.000994925
decoding algorithms 0.0009923
shallow parsing 0.000989432
cfg parsing 0.000989432
erroneous results 0.000986468
parse tree 0.00098402
first layer 0.00098234
original data 0.000977598
voted perceptron 0.000973406
