term score
language model 0.003507954
training data 0.00332441
smt model 0.00323237
model combination 0.003192578
model toolkit 0.003153337
base model 0.003090807
component model 0.003064911
single model 0.003064603
final model 0.003051098
weak model 0.00303734
strong model 0.003026195
model predictions 0.003012298
different data 0.00300305
bayesian model 0.002993876
model averaging 0.00298471
model switching 0.0029841
model averag 0.002983445
model 0.00273035
data set 0.002530652
other models 0.002483485
ing data 0.002466831
parallel data 0.002404677
available data 0.002351476
data sets 0.002346713
translation system 0.002311048
data sam 0.00224382
smt models 0.00224081
translation table 0.002237399
several models 0.002223804
machine translation 0.002162654
mixture models 0.002140482
multiple models 0.002130119
base models 0.002099247
models figure 0.002076075
component models 0.002073351
different feature 0.0020682
ensemble models 0.002064867
tion models 0.002049567
weak models 0.00204578
models mean 0.002045709
diverse models 0.002044369
strong models 0.002034635
sentence training 0.002027365
nent models 0.001991926
outperform models 0.001991926
same training 0.00194821
translation community 0.001921477
translation engines 0.001913835
training set 0.001906322
training time 0.001815486
different language 0.001806284
different learning 0.001772743
models 0.00173879
training sets 0.001722383
specific training 0.001711916
rate training 0.001711019
word alignment 0.001695952
feature function 0.001681082
translation 0.00166062
training instance 0.001658598
fixed training 0.001638686
nal training 0.001603793
cific training 0.00160282
training subsets 0.00160282
feature weights 0.001550452
different systems 0.001461106
different mixture 0.001430372
learning algorithm 0.001430277
train feature 0.001419811
feature functions 0.001415318
feature sampling 0.001405953
target language 0.001372479
source language 0.001371246
same features 0.001360741
training 0.00135004
different mod 0.001303176
feature subspace 0.001297123
different sizes 0.001295296
different options 0.001288696
different baselines 0.001282961
different spans 0.001282961
machine learning 0.001246097
language input 0.001192368
parallel corpus 0.001186226
corpus input 0.001170683
language pairs 0.001161033
same set 0.001154452
smt system 0.001152448
other mixture 0.001146387
function values 0.001133548
system combination 0.001112656
language pair 0.001109268
europarl corpus 0.001106917
learning methods 0.001105999
test sets 0.001101576
same smt 0.00110019
language output 0.001087973
baseline bleu 0.001086683
domain adaptation 0.00108386
ensemble learning 0.00107014
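The list above is a two-column term–score table in which terms may contain spaces, so splitting on every whitespace character would break multi-word terms. A minimal parsing sketch (the three sample rows are copied from the list; reading from a file instead of an inline string would be a hypothetical usage):

```python
# Minimal sketch: parse a ranked "term score" list like the one above.
# The sample rows are copied from the list; any filename you read this
# from instead would be an assumption of your setup.
raw = """\
language model 0.003507954
training data 0.00332441
smt model 0.00323237
"""

def parse_term_scores(text):
    """Return a list of (term, score) pairs.

    The score is the final whitespace-separated field on each line;
    the term is everything before it, so multi-word terms survive.
    """
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        term, score = line.rsplit(None, 1)
        pairs.append((term, float(score)))
    return pairs

pairs = parse_term_scores(raw)
print(pairs[0])  # ('language model', 0.003507954)
```

Using `rsplit(None, 1)` rather than `split()` is the key choice: it splits only once, from the right, so a term such as "machine translation" stays intact while the trailing score is still isolated.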
