translation evaluation 0.00390819
human translation 0.00366888
human evaluation 0.00320427
reference translation 0.002949541
translation adequacy 0.002733051
machine translation 0.002726538
bleu evaluation 0.00262529
translation variation 0.002620648
evaluation metric 0.002602394
translation quality 0.00259389
evaluation scores 0.002551252
clear translation 0.002508473
right translation 0.002497232
man translation 0.002485767
adequate translation 0.002475962
ence translation 0.002474943
legitimate translation 0.002472615
inadequate translation 0.002465661
translation equivalents 0.002464935
translation ade 0.002464338
translation equivalent 0.002463969
onymic translation 0.002461611
automatic evaluation 0.002403897
human scores 0.002311942
weighted evaluation 0.00229347
same word 0.002283328
human reference 0.002245621
evaluation measures 0.0022316
fluency evaluation 0.002226384
human translations 0.002202012
evaluation corpus 0.002201472
evaluation results 0.002199132
translation 0.0021864
evaluation method 0.002139679
evaluation set 0.00210725
evaluation methods 0.002077774
evaluation unit 0.002030398
evaluation tools 0.002023811
matic evaluation 0.002002343
evaluation bleu 0.00199654
evaluation toolkit 0.00199654
matched word 0.001895006
human judgments 0.001858672
human references 0.001825695
human evalua 0.00180183
human perspective 0.001771508
human judges 0.001770976
human judg 0.001767913
human judgements 0.0017655
correlation scores 0.001762697
human evaluators 0.001758622
bleu scores 0.001732962
evaluation 0.00172179
other scores 0.00161383
weighted model 0.00158661
same scores 0.00153468
reference translations 0.001482673
linguistic model 0.00140622
weighted scores 0.001401142
semantic issues 0.001379048
adequacy scores 0.001376113
function words 0.001350175
output sentence 0.001341732
same weights 0.00133625
candide system 0.001335953
system candide 0.001335953
fluency scores 0.001334056
standard bleu 0.001323629
bleu method 0.001321389
positive correlation 0.00131968
mean scores 0.00130558
bleu approach 0.00129172
model outper 0.001291464
explicit model 0.001291464
high correlation 0.001286835
system systran 0.00128116
correlation coefficient 0.001270809
significant correlation 0.001255251
source text 0.001252623
precision scores 0.001249979
bleu tool 0.001249512
correlation figures 0.001226989
candide sentence 0.0012253
english translations 0.001224441
baseline bleu 0.001209559
corresponding words 0.001205862
weighted matches 0.001195585
multiple reference 0.001191289
news texts 0.001187065
single reference 0.001150869
reference transla 0.001150052
salience scores 0.00114907
reference set 0.001148601
multiple translations 0.00114768
respective score 0.00114502
french sentence 0.001138991
lation scores 0.001134676
french news 0.001129311
certain words 0.001122758
other factors 0.001121145
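The list above resembles the output of an automatic term-ranking pass over a text about MT evaluation. The exact scoring method is not stated; a minimal sketch, assuming the scores are plain normalized term frequencies over unigrams and bigrams (the function name and sample text below are illustrative, not the original pipeline):

```python
from collections import Counter

def term_scores(text):
    """Rank unigrams and bigrams by relative frequency in `text`.

    Hypothetical reconstruction: each term's score is its count divided
    by the total number of unigram and bigram occurrences, so more
    frequent terms rank higher, as in the list above.
    """
    tokens = text.lower().split()
    bigrams = [" ".join(pair) for pair in zip(tokens, tokens[1:])]
    counts = Counter(tokens) + Counter(bigrams)
    total = sum(counts.values())
    return sorted(((term, count / total) for term, count in counts.items()),
                  key=lambda item: -item[1])

# Illustrative sample text, not the original corpus.
sample = ("human evaluation of machine translation correlates with "
          "bleu scores when translation quality is judged by human judges")

for term, score in term_scores(sample)[:5]:
    print(f"{term} {score:.6f}")
```

Truncated entries such as "ence translation" or "matic evaluation" would then be artifacts of hyphenated line breaks in the source text surviving tokenization.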
