topic model 0.00417586
language topic 0.00327331
topic distribution 0.00323695
model translation 0.00313072
topic probability 0.002987561
bilingual topic 0.002663851
same topic 0.002655886
lda model 0.00253373
generative model 0.002484306
topic level 0.002478413
topic similarity 0.002464554
cal model 0.002462594
topic number 0.00244816
prior topic 0.002432481
topic con 0.002420553
topic dis 0.002399393
particular topic 0.002384819
monolingual topic 0.00238348
topic distribution 0.002360494
lingual topic 0.002359296
topic frequency 0.002348497
latent topic 0.002345425
topic model 0.002345022
tical topic 0.002342689
topic numbers 0.002341768
topic discrepancies 0.002340447
different word 0.002300482
language document 0.00220089
document distribution 0.00216453
topic 0.00209052
model 0.00208534
probability distribution 0.002043471
word frequency 0.001887337
word distribution 0.001838828
same language 0.001748156
language documents 0.00165708
dirichlet distribution 0.001591433
source language 0.001574288
target language 0.001571398
similar document 0.001535652
document information 0.001535209
different data 0.001496383
language links 0.001473682
language topic 0.001472482
machine translation 0.001470509
uniform distribution 0.001404282
document topic 0.001399797
dirichlet distribution 0.001397214
document frequency 0.001396757
document similarity 0.001392134
specific document 0.00138946
document number 0.00137574
comparable document 0.001358759
document pairs 0.001334379
bilingual topics 0.001331151
document match 0.001328781
translation accuracy 0.001313266
translation equivalents 0.001300264
document alignment 0.001298378
translation vocabulary 0.001296123
gual document 0.001280797
similar topics 0.001275372
inverse document 0.001268699
analyzed document 0.001268699
different documents 0.001266092
vocabulary probability 0.001262697
conditional probability 0.001240907
total probability 0.001235281
lda models 0.001218144
language 0.00118279
probability formula 0.001182184
test dataset 0.001181932
training dataset 0.001180654
classified probability 0.001149101
distribution 0.00114643
same time 0.001139425
different languages 0.001132683
different lan 0.0011235
topics analysis 0.001122964
other documents 0.00111005
ing data 0.001096908
data source 0.001096079
specific data 0.001075941
different evaluations 0.00105399
bilingual corpora 0.001052406
first dataset 0.001052141
second dataset 0.001047785
translation 0.00104538
bilingual lda 0.001021721
document 0.0010181
similar documents 0.000991842
new method 0.000978587
new documents 0.000959829
particular data 0.000956245
training set 0.000950171
frequency distributions 0.000941185
bilingual extension 0.000916448
semantic information 0.000909836
similar target 0.00090616
other algorithms 0.000900238
