word vector 0.002457129
word vectors 0.002292385
other word 0.002091446
good word 0.002003503
word embedding 0.001976839
word vec 0.001966834
word represen 0.001880436
syntactic features 0.001662039
hidden layer 0.001569038
network training 0.001547469
output layer 0.001543463
vector representation 0.001536997
different time 0.001502889
detection features 0.001494254
continuous features 0.001486632
common words 0.001477985
feature set 0.00144197
different representation 0.001438265
neural network 0.001422104
latent features 0.001415337
input layer 0.001356211
neural models 0.001314418
vector representations 0.001313297
output vectors 0.001299816
input vector 0.001277308
sequential data 0.001266861
same time 0.001265799
representation space 0.00125943
previous layer 0.001235061
input representation 0.001206747
next layer 0.001202826
single layer 0.001202404
training methods 0.001199377
hidden representations 0.001195874
features 0.00119335
time step 0.001173321
neural networks 0.001142638
such models 0.001138937
high time 0.001133374
words 0.00112825
final layer 0.001121904
crf model 0.001117957
appropriate feature 0.001113502
supervised training 0.001113311
input vectors 0.001112564
layer decision 0.001112077
memory representation 0.001108279
den layer 0.001105762
layer computations 0.001099827
network size 0.001097179
recurrent neural 0.001095232
semicrf model 0.001094943
fective training 0.001064943
deep learning 0.001063336
model precision 0.001055266
final model 0.001054442
vector rep 0.001052229
bias vector 0.001051279
time steps 0.001049151
model selection 0.001042038
such rnn 0.001039351
hidden layers 0.001034658
ence model 0.001030377
incoming vector 0.001027293
ferent time 0.001019207
time scales 0.001016487
future information 0.00101037
deep rnn 0.001005846
hidden units 0.001003089
same space 0.000994169
output weight 0.000987356
tional network 0.000984534
opinion analysis 0.000979536
output units 0.000977514
neural net 0.000974915
intermediate representation 0.00096113
language sentences 0.00095926
termediate representation 0.000951668
representation learners 0.000950239
deep rnns 0.000940032
example sentence 0.000935038
hidden rep 0.000934806
deep networks 0.000933652
objective function 0.000931781
single sentence 0.000924628
softmax function 0.000921031
prediction approach 0.000919379
tagging scheme 0.000915246
nlp tasks 0.000915047
tion space 0.000909523
output bias 0.000908281
bio tagging 0.000908131
work opinion 0.000907865
nonlinear function 0.000907793
sigmoid function 0.000905558
final output 0.000900003
space recurrent 0.000896572
labeling approach 0.000895147
appropriate output 0.000892172
deeprnn output 0.000888821
