translation model 0.00375964
language model 0.00360492
training data 0.00285037
model probability 0.002749421
language models 0.002592855
machine translation 0.002442936
model scores 0.002407307
different language 0.002403099
model score 0.002342892
translation baseline 0.002293476
network language 0.00228947
translation performance 0.002272084
test data 0.00226239
english translation 0.002225269
patent translation 0.002217109
translation task 0.00221452
model abnlm 0.002206288
multilingual translation 0.002098983
translation subtask 0.002098983
translation laboratory 0.002098983
natural language 0.002074685
language modeling 0.002019744
various language 0.001998956
other words 0.001997217
patent data 0.001995029
language processing 0.001966641
rbm language 0.001944191
model 0.0019338
development data 0.001930417
data sparseness 0.001894253
translation 0.00182584
frequent words 0.001748781
current word 0.001675456
unknown words 0.001674409
language 0.00167112
sampling words 0.001669755
parallel training 0.001647337
word penalty 0.001614322
training sentences 0.001581883
training cslms 0.001564771
training cost 0.001544284
training costs 0.001537323
same corpus 0.001472435
words 0.00138782
output layer 0.001364013
standard bnlm 0.001312847
probability estimation 0.001288655
neural network 0.001285445
same size 0.001279834
order models 0.001265635
bleu scores 0.00126392
standard smt 0.00126254
training 0.00124661
small corpus 0.001218021
bleu score 0.001199505
results table 0.001157579
input layer 0.001144579
probability distributions 0.001136511
hidden layer 0.001135376
experimental results 0.001122403
smoothing method 0.00111693
different lms 0.001107206
projection layer 0.001105263
statistical machine 0.001104578
probability mass 0.001099079
probability calculation 0.001092997
discounted probability 0.001091077
percent bleu 0.001063647
sentence development 0.001057187
same way 0.001030153
reranking results 0.001007559
layer projects 0.001004031
previous work 0.000983381
neural networks 0.000980557
significance test 0.000979339
last approach 0.000970175
common approach 0.000969126
pass approach 0.000966029
large smt 0.000965277
sampling test 0.000940565
first pass 0.000935569
baseline system 0.000933503
tokenized test 0.000932157
baseline bnlm 0.000927764
models 0.000921735
large amount 0.000900238
whole vocabulary 0.000897311
smt system 0.000875688
pass decoding 0.000864273
future work 0.000862366
standard 0.000852719
smt decoding 0.000850622
scale experiments 0.000844317
speech recognition 0.000840012
little work 0.000833083
second pass 0.00082303
first train 0.000821406
moses phrase 0.000821148
count cutoffs 0.00081815
probability 0.000815621
