word model 0.00417693
model training 0.00360165
target word 0.00337456
source word 0.00313268
language model 0.002958759
training data 0.00287506
generative model 0.002773706
translation features 0.00276023
transliteration training 0.00275927
english word 0.00274617
model weights 0.002580419
word list 0.002556081
linear model 0.002555737
model linear 0.002555737
translation system 0.002528911
new word 0.002519244
hybrid model 0.002490882
unigram model 0.002438432
channel model 0.002429087
own model 0.002423735
model quality 0.002411741
model size 0.002395286
gram model 0.002391923
word unigram 0.002380622
original word 0.002372937
model construction 0.002371467
japanese word 0.0023652
katakana word 0.002327261
independent word 0.002320819
word tmost 0.002319505
word prefixes 0.002316529
word uni 0.002313653
other translation 0.00230408
english translation 0.00227619
training corpus 0.002263006
same training 0.002215743
translation pairs 0.002214996
transliteration system 0.002214321
translation evaluation 0.002210753
translation test 0.002156441
target language 0.002156389
training set 0.002144589
same data 0.002122243
model 0.00211737
training pairs 0.002109696
target words 0.002100031
discriminative training 0.002087054
reference translation 0.00204711
training method 0.002045269
wikipedia training 0.002042499
translation table 0.002039397
target character 0.002036785
feature set 0.002029609
target context 0.002009707
training pair 0.002003525
translation scores 0.001996775
transliteration probability 0.001977458
machine translation 0.001974354
generative target 0.001971336
target lexicon 0.00197065
test data 0.001957641
wikipedia data 0.001948999
new training 0.001943964
translation modeling 0.001942504
relative translation 0.001942268
target pairs 0.001940416
training derivations 0.001932532
smt features 0.001917035
transliteration pairs 0.001900406
perceptron training 0.001889069
translation quality 0.001883951
available training 0.00187859
bleu training 0.001874892
context features 0.001865357
source words 0.001858151
perfect translation 0.001852994
translation requests 0.00184601
augmented translation 0.001845415
translation tables 0.001843589
tical translation 0.001843012
translation ser 0.001842339
transliteration test 0.001841851
transliteration task 0.001837628
generative features 0.001826986
lexicon features 0.0018263
tive training 0.001823315
training examples 0.00181333
same source 0.001804583
input training 0.001774855
training example 0.001769168
source context 0.001767827
feature count 0.001756084
training transliter 0.001752838
feature types 0.00174938
feature space 0.00174631
ceptron training 0.001744584
modeling data 0.001743704
tron training 0.001736987
overall feature 0.001734327
transliteration table 0.001724807
