word model 0.00407971
topic model 0.00388926
context word 0.00330363
topic models 0.002754303
topic distribution 0.002750947
word space 0.002646918
lda model 0.002599479
target word 0.002587425
word vectors 0.002570226
argument word 0.002508883
topic space 0.002456468
model setting 0.002452367
large word 0.002440741
word window 0.002423962
lda topic 0.002392319
particular word 0.002384426
topic vectors 0.002379776
inference topic 0.002379756
distributional model 0.002376974
our model 0.002374642
original word 0.002357419
latent topic 0.002349216
level model 0.00234446
topic distributions 0.002344134
different similarity 0.00233659
word level 0.00232775
model construction 0.002313064
sensitive model 0.002311733
word lemma 0.002311233
ment word 0.00230447
own model 0.002302017
model evaluations 0.002288778
comparative model 0.002288778
word windows 0.002274892
similarity models 0.002218153
specific topic 0.002194562
context models 0.002185383
topic probabilities 0.002158866
vector similarity 0.002156847
topic relevance 0.002097824
other words 0.002092382
topic bias 0.002091679
illustrated topic 0.002081445
model 0.00204821
semantic similarity 0.002037481
rule score 0.001945319
lexical similarity 0.001934663
inference rule 0.001933386
rule scores 0.001911675
similarity score 0.001855539
topic 0.00184105
similarity scores 0.001821895
rule set 0.001806428
argument words 0.001795533
similarity measure 0.001778622
rule application 0.001748342
particular rule 0.001747606
candidate rule 0.001733388
rule learning 0.001726445
similarity methods 0.001696456
cosine similarity 0.001687942
template rule 0.001685363
correct rule 0.001682616
rule reliability 0.001678438
rule applications 0.001677485
valid rule 0.001672583
incorrect rule 0.001667119
rule side 0.001660262
sensitive rule 0.001658203
rule resource 0.001654934
above rule 0.001653776
similarity paradigm 0.001647171
rule sides 0.001643166
context modeling 0.001638452
rule appli 0.001637848
lin similarity 0.001636928
ence rule 0.001635143
rule applica 0.001635143
rule inferences 0.001635143
distributional similarity 0.001633664
original similarity 0.001630819
similarity scheme 0.001624227
current context 0.001613825
translation probability 0.001604363
level similarity 0.00160115
similarity measures 0.001599601
context representation 0.001597913
base similarity 0.001592194
ment words 0.00159112
words calbiochem 0.001583292
sensitive similarity 0.001568423
stop words 0.001567031
rare words 0.001562005
prominent words 0.001559783
similarity mea 0.001552646
directional similarity 0.001551707
tributional similarity 0.0015514
context sensitivity 0.001549566
different scores 0.001548685
sure similarity 0.001545915
