term                     score
test data                0.003974280
training data            0.003961590
data set                 0.003519608
feature model            0.003360770
data sets                0.003306401
labeled data             0.003282090
target data              0.003254125
wsj data                 0.003212101
additional data          0.003143149
data points              0.003139409
development data         0.003117381
media data               0.003069663
data point               0.003058553
annotated data           0.003049236
ical data                0.003047526
newspaper data           0.003025976
fourth data              0.003025976
model parameters         0.002435643
possible model           0.002432902
current model            0.002338698
different feature        0.002245154
model                    0.001941050
learning problem         0.001760485
test sets                0.001755181
nlp test                 0.001695495
feature bags             0.001693203
feature bagging          0.001689602
feature bag              0.001684595
default feature          0.001684595
weighted learning        0.001668817
learning algorithms      0.001653056
test distributions       0.001633683
adversarial learning     0.001612212
learning algo            0.001598816
rare features            0.001593471
perceptron learning      0.001570802
test sentences           0.001566290
corresponding features   0.001553039
inative learning         0.001545688
sarial learning          0.001545688
batch learning           0.001545688
predictive features      0.001527054
indicative features      0.001523426
lated features           0.001517161
missing features         0.001512926
ble test                 0.001489977
training examples        0.001486166
word tokens              0.001430350
feature                  0.001419720
linear models            0.001388544
unseen word              0.001340201
other words              0.001331849
robust models            0.001313550
supervised baseline      0.001302864
same way                 0.001300805
target domain            0.001286895
learning                 0.001281040
sparse models            0.001263322
features                 0.001248080
tagging performance      0.001236365
domain experiments       0.001233375
new method               0.001205171
different domains        0.001204669
training                 0.001198840
other approaches         0.001196247
different views          0.001134250
different spelling       0.001133238
several target           0.001129138
domain shifts            0.001110956
different bags           0.001098917
different sources        0.001095210
previous work            0.001091053
likely performance       0.001074407
pos tagging              0.001050122
dependency treebank      0.001027752
predefined set           0.001026480
finite set               0.001025529
other player             0.001022529
pos tag                  0.001009329
enron corpus             0.000999039
models                   0.000995977
same effect              0.000992314
following way            0.000971146
random adversaries       0.000965244
tagging problem          0.000964893
good fea                 0.000962056
random subset            0.000961637
standard nlp             0.000939702
previous approaches      0.000934523
copenhagen dependency    0.000923419
several authors          0.000916504
target distributions     0.000913528
such datasets            0.000906263
such shifts              0.000904561
optimization problem     0.000903137
algorithm                0.000895016
random ones              0.000887201
new domains              0.000872969
target domains           0.000870610
unknown words            0.000854427
