term                      weight
training sentences        0.002787794
sentence entropy          0.002738015
sentence length           0.00269488
same sentence             0.00255644
labeled sentences         0.0025082
use sentence              0.002467032
similar sentences         0.002388401
sentence distance         0.002376571
sentence clustering       0.002358199
sentence clusters         0.002328836
candidate sentences       0.002311215
long sentence             0.00230518
clusters sentences        0.002288686
long sentences            0.00226503
short sentences           0.002254098
sentence density          0.002246296
annotated sentences       0.002231252
sentence pair             0.002226242
sentence lengths          0.00222619
unlabeled sentence        0.002224066
input sentence            0.002219368
unparseable sentence      0.002206486
unannotated sentences     0.002171002
repeated sentences        0.002167157
model entropy             0.001964945
sentence                  0.00192084
sample selection          0.001893841
sentences                 0.00188069
selection method          0.001858509
selection number          0.001831532
new model                 0.001729165
random selection          0.001708485
performance training      0.001683459
selection use             0.001664282
word entropy              0.001657929
training data             0.001586913
model output              0.001554672
training corpus           0.001530529
current model             0.001526019
initial model             0.001523916
statistical model         0.0015197
selection process         0.001503241
active training           0.0014547
random selection          0.001437972
training set              0.001437256
model parameters          0.001436502
current model             0.001434532
existing model            0.001433637
selection algorithms      0.001426832
selection schemes         0.00141339
language models           0.001365307
active learning           0.001355773
other words               0.001340682
previous word             0.001340194
language parsing          0.001334746
natural language          0.001331917
parse tree                0.001328871
entropy figure            0.00131912
sample distribution       0.001288773
similar performance       0.001284066
initial training          0.00128325
training events           0.001280101
performance test          0.001268446
next training             0.001249186
basic sample              0.001242344
parse scores              0.001225606
whole training            0.001224316
second word               0.001223373
sample space              0.001207376
possible parse            0.00119383
learning process          0.001193328
human annotation          0.001185634
parse trees               0.001171478
semantic parse            0.001159896
machine learning          0.001156421
first word                0.001155798
clustering number         0.001150801
model                     0.00114777
clustering algorithm      0.001147143
active learning           0.001143331
parse actions             0.001141379
change entropy            0.001140586
same uncertainty          0.001133897
unigram word              0.001131445
word fly                  0.001131411
different weighting       0.001128959
word distributions        0.001128439
learning result           0.001128146
learning iteration        0.001127708
learning curve            0.001121308
language processing       0.001118591
selection                 0.00111809
learning algorithms       0.001116919
uncertainty score         0.001110007
learning curves           0.001102759
entropy now               0.00110255
parse events              0.001101834
density performance       0.001101811
sample density            0.001101207
entire sample             0.001100971
