topic model 0.00275798
word feature 0.00272935
topic features 0.00270695
word sense 0.002618046
task data 0.00257937
topic feature 0.00237281
training data 0.002178813
words task 0.00215899
topic distribution 0.002154127
other features 0.002078648
english word 0.002053978
test data 0.002033144
testing word 0.002014853
baseline features 0.002008663
unlabeled data 0.001995716
language features 0.00197666
local features 0.001963717
document model 0.001963085
data size 0.001957701
child word 0.001953422
annotated word 0.00191774
bayesian model 0.001906895
ambiguous word 0.001904288
sample data 0.001852922
testing data 0.001838413
labeled data 0.001831935
lda features 0.001805521
topic label 0.001804469
standard features 0.001802262
bayes model 0.00179515
task training 0.001782583
global features 0.001779024
training corpus 0.001765333
different corpus 0.001753813
data scarcity 0.001749753
semcor data 0.00174251
senseval data 0.001735527
ple data 0.001734628
graphical model 0.00172642
discrete data 0.00172463
semantic topic 0.001722574
probabilistic model 0.001704912
collocation features 0.0016552350000000001
topic distributions 0.001626471
original topic 0.001618665
topic dis 0.001607921
unlabeled corpus 0.001582236
selecting topic 0.001567643
latent topic 0.001549773
anced topic 0.001546956
topic z_dn 0.001545685
corpus size 0.001544221
skewed topic 0.001543229
ing corpus 0.001521461
lda feature 0.001471381
task accuracy 0.001469228
sample task 0.001456692
model 0.00145028
wsd task 0.001444898
labeled corpus 0.001418455
features 0.00139925
context information 0.00138506
target words 0.001366069
ple task 0.001338398
single words 0.001334827
corpus selection 0.001334536
corresponding feature 0.001334096
reuters corpus 0.00133265
individual words 0.001329251
semcor corpus 0.00132903
probability distribution 0.001328706
task respec 0.001327214
different pos 0.001320657
feature construction 0.001319855
clusters words 0.001319566
newsgroups corpus 0.001310311
topic 0.0013077
ambiguous words 0.001307468
neighboring words 0.001306557
same document 0.001293639
lda models 0.001271872
correct sense 0.00126523
same approach 0.001228678
sense disambiguation 0.001227022
training process 0.001181816
pos tag 0.001163226
specific distribution 0.001160298
same accuracy 0.001158492
posterior distribution 0.001155222
learning algorithm 0.001144705
ing context 0.00114135
same sentence 0.001133098
variational distribution 0.00113239
topical distribution 0.001117434
pos tagging 0.001102533
lda training 0.001097284
other parameters 0.001092142
inference algorithm 0.001088795
corpus 0.00107432
global context 0.001073983
