term                      score
model performance         0.002572506
model approach            0.002419116
single model              0.002401814
selection model           0.002362054
initial model             0.002277868
product model             0.00222405
selector model            0.00221796
selecting model           0.002215446
weak model                0.002207132
linear model              0.00220663
different models          0.00220441
later model               0.002203725
ensemble model            0.002196892
lar model                 0.002165567
model perf                0.002165295
training data             0.001998079
model                     0.00192203
other models              0.001900689
human language            0.001788102
many models               0.001704253
labeled data              0.00163152
selection models          0.001622824
various models            0.001573768
language technologies     0.001565732
tion models               0.001534743
training set              0.001531363
multiple models           0.001528087
individual models         0.001513504
selector models           0.00147873
base models               0.001472573
weak models               0.001467902
linear models             0.0014674
later models              0.001464495
ensemble models           0.001457662
perceptron models         0.001432632
semble models             0.00143017
feature set               0.001428551
ble models                0.001426454
perceptrons models        0.001424781
lection models            0.001424781
diverse models            0.001424781
data points               0.001418669
lected data               0.001400593
parse selection           0.00136922
test set                  0.001347462
language                  0.00132116
labeled training          0.001311479
different experiment      0.001294579
annotation cost           0.001265395
different selectors       0.001265138
machine learning          0.001210887
modeling parse            0.001210684
annotated training        0.001189053
available training        0.001187175
models                    0.0011828
preferred parse           0.001180758
training material         0.001171684
parse selec               0.001170928
parse forest              0.001170928
learning algorithm        0.001125689
partial information       0.001110117
feature vector            0.001098528
selection performance     0.0010905
reusing training          0.00108051
full cost                 0.001075266
main feature              0.001065908
information gain          0.001060377
complete learning         0.001058308
total cost                0.001045679
cost reduction            0.001042741
learning curves           0.001037291
feature sets              0.001027559
selection accuracy        0.001025325
performance level         0.001023256
active learning           0.001006544
other strategies          0.001003014
large number              0.000998652
learning studies          0.000998327
random sampling           0.000997883
sentence subset           0.000997184
tree entropy              0.000996889
mrs feature               0.000995191
unit cost                 0.000991966
variable cost             0.000991714
entire set                0.000989654
learning literature       0.000988007
phrase structure          0.000985601
cost selector             0.000985319
seed sentence             0.000984409
labeling corpus           0.000982349
conglomerate feature      0.000981578
tinct feature             0.000978959
performance levels        0.000972279
sampling figure           0.000967263
good results              0.000967071
cost gap                  0.000959871
test split                0.00095966
such examples             0.000956703
words                     0.000956536
random selection          0.00095376
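The list above appears to be automatically extracted term scores, one term per line with its weight as the final token (some entries in decimal notation, others in scientific notation such as 9.97883E-4). A minimal sketch for reading such lines back into sorted (term, score) pairs, assuming only that the last whitespace-separated token on each non-empty line is a parseable float:

```python
def parse_term_scores(lines):
    """Parse lines of the form '<term words...> <score>' into (term, score) pairs.

    Assumes the final whitespace-separated token on each non-empty line is a
    float score (decimal or scientific notation, e.g. '9.97883E-4') and that
    everything before it is the term. Returns pairs sorted by score, descending.
    """
    pairs = []
    for line in lines:
        parts = line.split()
        if len(parts) < 2:
            continue  # skip blank or malformed lines
        *term_words, score = parts
        try:
            pairs.append((" ".join(term_words), float(score)))
        except ValueError:
            continue  # last token was not a number; skip the line
    return sorted(pairs, key=lambda p: p[1], reverse=True)

sample = [
    "model performance 0.002572506",
    "random sampling 9.97883E-4",
]
print(parse_term_scores(sample))
# → [('model performance', 0.002572506), ('random sampling', 0.000997883)]
```

Python's `float()` accepts both notations directly, so no special handling of the E-4 entries is needed.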
