word context 0.00456461
word corpus 0.003979047
cluster word 0.003706646
large word 0.003679232
word type 0.003664856
target word 0.003657626
word pairs 0.003653274
word window 0.003631271
single word 0.003624733
substitute word 0.003621336
word mean 0.00360527
word vocabulary 0.00359499
word types 0.003586821
second word 0.003586253
word con 0.003586094
individual word 0.003552698
left word 0.003545756
word contexts 0.003545253
word tokens 0.003544739
right word 0.00354177
corresponding word 0.003525731
word assumption 0.00352295
word penn 0.003522248
word classes 0.003519632
gle word 0.003516942
lion word 0.003509486
word instance 0.003506065
word position 0.003501891
word occurrences 0.003495635
adjacent word 0.003495115
word posi 0.003495115
word identity 0.003495115
model training 0.002802604
language model 0.002784168
baseline model 0.002418715
computational model 0.00230688
such words 0.002268135
hmm model 0.002234448
paradigmatic model 0.002205687
brown model 0.002204803
bigram model 0.002184356
markov model 0.002167939
mixture model 0.002164207
syntagmatic model 0.002156991
based model 0.002140084
guage model 0.002137682
model max 0.002133099
semantic context 0.00211426
context vector 0.002112413
individual words 0.002032378
right words 0.00202145
initial words 0.001994654
tute words 0.001993926
infrequent words 0.001978667
neighboring words 0.001978006
lowercase words 0.001975711
italized words 0.001975711
capitalized words 0.001975711
unsubstitutable words 0.001975711
context similarity 0.001964523
context information 0.00192882
model 0.00191592
context vectors 0.001888836
training data 0.001856458
words 0.00176195
context representations 0.001729825
context representation 0.001685693
context types 0.001586891
paradigmatic context 0.001572107
discrete context 0.001547484
context statistics 0.001547016
left context 0.001545826
right context 0.00154184
alignment features 0.001508395
training algorithm 0.001447426
test data 0.001395073
data set 0.00139226
similar training 0.001386757
distributional models 0.001357673
input features 0.00134056
paradigmatic models 0.001318757
context 0.00128234
models mto 0.001271755
vector space 0.001271477
text vector 0.001268228
paradigmatic features 0.001263314
additional features 0.001253475
morphological features 0.001250188
bilistic models 0.00124551
learning algorithm 0.001232358
training settings 0.001224325
data embedding 0.001222642
labeled data 0.001217755
features mto 0.001216312
journal data 0.001215567
data sparsity 0.00121454
syntactic information 0.001214231
orthographic features 0.001213642
occurrence data 0.00120722
logical features 0.001191578
