phrase	score
semantic model	0.0040186
learning model	0.003242414
simple model	0.002761437
model weights	0.002743305
tion model	0.002720866
semantic models	0.002681618
optimal model	0.002667438
initial model	0.00264277
variable model	0.002629885
supervised model	0.002629117
model parameters	0.002617738
segmentation model	0.002589212
chain model	0.002588807
reasonable model	0.002587088
model seman	0.002569114
segmentations model	0.002557763
butional model	0.002556394
pervised model	0.002549761
tation model	0.002546114
sdsm model	0.002545302
model biases	0.002544299
tions model	0.002543665
mentation model	0.002543665
cant model	0.002543665
model	0.00232348
semantic composition	0.002240776
semantic similarity	0.002236028
word vectors	0.002213388
distributional word	0.002207601
word motifs	0.002131132
semantic embeddings	0.002065931
word embeddings	0.002025921
semantic relations	0.002009404
word tokens	0.00200868
semantic meanings	0.001997274
semantic mapping	0.001967579
semantic map	0.001961569
semantic simi	0.001949734
arbitrary word	0.001947087
semantic perspective	0.001940612
semantic repre	0.00193795
semantic rep	0.00193665
semantic representa	0.001936284
semantic com	0.001933316
word usage	0.001927406
semantic represen	0.001926184
semantic senses	0.001921041
semantic characteristics	0.001918859
semantic cohesiveness	0.001916587
semantic cohesion	0.001915128
semantic mappings	0.001915128
semantic conno	0.001915128
word shape	0.001876765
word neighbourhoods	0.00187508
such models	0.001856938
vector representations	0.001855762
vector composition	0.001671246
learning motif	0.001619649
representation learning	0.001582311
vector distance	0.001538856
such data	0.001526095
human language	0.001504583
vector tree	0.001501739
function words	0.001478234
neural language	0.001475933
new language	0.001460104
recent vector	0.001445252
motif representations	0.001430887
weight learning	0.001427849
dimensional vector	0.001402675
vector repre	0.00136842
vector compositionality	0.001364576
language modeling	0.001354667
such motifs	0.001346462
syntactic information	0.001341255
language usage	0.001333106
training data	0.001309923
tional models	0.001297634
learning procedure	0.001285085
distributional representations	0.001282663
language speaker	0.001281175
multiplicative models	0.001275455
learning algorithm	0.00127504
learning framework	0.001269081
different representations	0.001265006
distributional motif	0.001253206
segmentation models	0.00125223
motif similarity	0.001241623
such embeddings	0.001241251
meaning representations	0.001237954
guage models	0.001224984
supervised learning	0.001224571
butional models	0.001219412
distributional representation	0.001215868
traditional models	0.0012152
syntactic analysis	0.001208449
discriminative models	0.001206503
individual words	0.001192767
learning rate	0.00117398
meaning representation	0.001171159
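The listing above is a set of phrases ranked by score in descending order. A minimal sketch of loading and re-ranking such a list, assuming whitespace-separated lines whose last field is the numeric score (the parsing convention here is an assumption, not part of the original data format specification):

```python
# Parse "phrase score" lines: the last whitespace-separated field is taken
# as the score; everything before it forms the (possibly multi-word) phrase.
def parse_scored_phrases(lines):
    scored = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        *words, score = line.split()
        scored.append((" ".join(words), float(score)))
    # Sort by score, highest first, to recover the ranking.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical sample drawn from the first three entries of the list above.
sample = [
    "semantic model 0.0040186",
    "learning model 0.003242414",
    "simple model 0.002761437",
]
ranked = parse_scored_phrases(sample)
print(ranked[0])  # → ('semantic model', 0.0040186)
```

Keeping the phrase and score as a tuple (rather than a dict) preserves duplicate scores, which do occur in the list above (e.g. three phrases share 0.001915128).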
