language model 0.003522908
word spacing 0.003386256
korean word 0.003260962
automatic word 0.003252217
word order 0.00324218
incorrect word 0.003218157
true word 0.003209888
wrong word 0.00317484
word spac 0.003156985
inaccurate word 0.003152759
spacing model 0.003103736
trigram model 0.003041701
good model 0.003022145
bigram model 0.00301593
gram model 0.003010061
markov model 0.003000608
entropy model 0.002994155
basic model 0.002974079
tic model 0.002927477
current model 0.002924464
dimensional model 0.002892391
unigram model 0.002880116
combined model 0.002873196
model 0.00261761
data set 0.001919503
context size 0.00187418
specific data 0.0017525
gram data 0.001751691
context accuracy 0.001733364
other words 0.001701015
language models 0.001660168
dimensional data 0.001634021
data sparseness 0.001623666
data increases 0.001612554
data density 0.001611741
method accuracy 0.001569134
local context 0.001563599
corpus size 0.001542988
training set 0.001536252
current context 0.001501734
context extension 0.001491783
right context 0.001482428
new method 0.001481218
optimal context 0.001467488
context all 0.001462545
left context 0.001453364
context outper 0.001452953
narrow context 0.001449319
nizing context 0.001447873
expanded context 0.001447873
language sentences 0.001442631
bigram information 0.00139922
content words 0.001397546
unknown words 0.001397492
specific information 0.00139416
large corpus 0.00137034
local information 0.001369619
neighbor words 0.001364861
korean information 0.001361732
probabilistic method 0.001332961
information processing 0.001318667
information retrieval 0.001304138
specific language 0.001298558
proposed method 0.001291039
korean language 0.00126613
necessary information 0.001260774
morphemic information 0.001255158
rean information 0.001255158
information lack 0.001255158
training examples 0.001254913
order language 0.001247348
distance features 0.001247033
training instance 0.001229348
language processing 0.001223065
language modeling 0.001211026
natural language 0.001210333
previous tags 0.001199666
context 0.00119488
such languages 0.00119077
strong models 0.001186448
tag sequence 0.001163603
language charac 0.001161913
successful language 0.001159898
gram models 0.001147321
test set 0.001144225
target task 0.001142157
other methods 0.001125203
vector machine 0.001115155
words 0.0011104
similar distribution 0.001101914
window size 0.001100497
various methods 0.001086435
function changingwindowsize 0.001077042
methods accuracy 0.001073072
previous work 0.001049638
other languages 0.001047895
vector machines 0.001044438
such cases 0.001042278
simple task 0.00103593
function howsmallshrink 0.001035658
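A list in this "term score" format can be loaded and checked with a few lines of Python. The following is a minimal sketch, not part of the original output: the function name `parse_term_scores` and the inline sample are illustrative assumptions; it simply splits each line on the last whitespace, since terms may contain spaces but scores never do.

```python
# Minimal sketch for reading a ranked "term score" list like the one above.
# The function name and sample data are illustrative, not from the source.

def parse_term_scores(lines):
    """Turn lines of 'multi word term <float>' into (term, score) pairs.

    The score is always the last whitespace-separated field, so rsplit with
    maxsplit=1 keeps multi-word terms intact.
    """
    pairs = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        term, score = line.rsplit(maxsplit=1)
        pairs.append((term, float(score)))
    return pairs

sample = [
    "language model 0.003522908",
    "word spacing 0.003386256",
    "korean word 0.003260962",
]
pairs = parse_term_scores(sample)
# The list arrives sorted by descending score; this checks that invariant.
assert all(a[1] >= b[1] for a, b in zip(pairs, pairs[1:]))
print(pairs[0])  # → ('language model', 0.003522908)
```

Parsing with `rsplit(maxsplit=1)` rather than `split()` is the key choice here: it tolerates terms of any word count, including single-word entries like `model` or `context`.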
