context model 0.003469232
model test 0.003122679
learning model 0.003100218
text model 0.003017379
generative model 0.003001013
reranking model 0.002969906
model application 0.002946586
model figure 0.002893702
statistical model 0.002884914
model 0.00257157
word order 0.002094798
syntactic function 0.002032623
feature set 0.001936884
dependency parsing 0.001919270
data set 0.001849075
syntactic information 0.001836495
data context 0.001794869
dependency information 0.001793365
function words 0.001727188
dependency structure 0.001722941
statistical dependency 0.001696509
correct dependency 0.001637275
lexical dependency 0.001634929
japanese dependency 0.001623376
sentence probability 0.001610356
training corpus 0.001603464
such feature 0.001596794
dependency analysis 0.001577392
training data 0.001520914
set case 0.001508855
syntactic property 0.001503706
accuracy context 0.001502731
dependency relation 0.00149666
available dependency 0.00149515
dependency relations 0.00149448
dependency analyzer 0.001487915
syntactic constraint 0.001473129
syntactic properties 0.001470571
parsing method 0.001470131
test data 0.001448316
dependency analyzers 0.001439694
automatic dependency 0.001438886
actual dependency 0.001437915
dependency relation 0.001424003
sentence accuracy 0.001400608
accuracy sentence 0.001400608
particle set 0.001400002
statistical models 0.001390829
parsing accuracy 0.001373939
different method 0.001364749
element set 0.001337467
sentences context 0.001327227
content words 0.001317836
evaluation data 0.001308373
conditional probability 0.001294156
elements context 0.001291364
same case 0.001287052
japanese language 0.00127237
kyodai corpus 0.00127055
japanese sentence 0.001268515
such information 0.001254743
entire set 0.001246295
example sentence 0.001243197
parsing japanese 0.001241846
various models 0.001233437
development data 0.001231169
accuracy accuracy 0.001210138
input sentence 0.001209404
posterior context 0.001205041
generation probability 0.001200946
incorrect context 0.001195068
syntactic 0.00119353
statistical information 0.001189074
possible parses 0.001184835
whole data 0.001180658
data equation 0.001178907
same particle 0.001178199
head words 0.00117501
posterior context 0.001170368
posterior context 0.001168581
probability estimation 0.001159698
dependency 0.0011504
other pos 0.001132653
lexical information 0.001127494
separate models 0.001127158
variant models 0.001116543
contextual information 0.001107913
correct probability 0.001107052
reranking method 0.001099597
bunsetsu accuracy 0.001092409
occurrence probability 0.001088205
changed probability 0.001085368
whole sentence 0.00107899
sentence accuracies 0.001074904
inputted sentence 0.001067431
dependency parsing 0.001058143
training sentences 0.001053272
same cases 0.001044257
experimental results 0.001026421
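Assuming each line above is a term followed by a probability-like score (the term may contain spaces; the score is the last whitespace-separated field), a minimal sketch for loading and ranking such pairs — the function name and sample data here are illustrative, not part of the original:

```python
def parse_term_scores(lines):
    """Parse lines of the form "<term> <score>" into (term, float) pairs.

    The term itself may contain spaces, so we split on the LAST space only.
    Blank lines are skipped.
    """
    pairs = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        term, score = line.rsplit(" ", 1)  # score is the final field
        pairs.append((term, float(score)))
    return pairs


# Hypothetical sample mirroring the list's format:
data = """context model 0.003469232
model test 0.003122679
word order 0.002094798"""

pairs = parse_term_scores(data.splitlines())
# Rank by score, highest first:
top = sorted(pairs, key=lambda p: p[1], reverse=True)
```

With this, `top[0]` is the highest-scoring term pair, matching the descending order the list already appears to be in.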
