different data 0.00232726
training data 0.002043858
events data 0.001947512
data test 0.001943441
new data 0.001910403
other word 0.001748746
tense information 0.001742195
data sets 0.00173265
word frequency 0.001714185
experimental data 0.001657478
data maxent 0.001626712
linguistic information 0.001623013
data train 0.001619891
newswire data 0.001594286
webblog data 0.001587949
data subsets 0.001560752
same information 0.001552636
blog data 0.001550544
data genre 0.001546525
noisy data 0.001546525
weblog data 0.001546525
data sparsity 0.001546525
new corpus 0.001530354
different error 0.001529271
previous word 0.001481315
tense annotation 0.001454469
manual word 0.00139602
next word 0.001395278
single word 0.001388441
other events 0.001338558
word sequences 0.001337752
chinese events 0.001336042
example word 0.001332308
learning method 0.001328585
parallel corpus 0.001316949
word alignments 0.001301375
different types 0.001298224
training set 0.001295451
word embedding 0.001292302
modality information 0.001291876
word embeddings 0.001290201
other features 0.001288385
linguistic features 0.001280468
events test 0.001276233
different subsets 0.001273292
annotation method 0.001267486
different inter 0.00126237
event pos 0.001251408
guistic information 0.001246668
annotated corpus 0.001236833
multiple words 0.001224052
lexical features 0.00122017
chinese sentence 0.00120864
explicit information 0.001203229
information extraction 0.001200837
test set 0.001195034
new features 0.001193022
chinese text 0.0011896
tense label 0.001189158
entire corpus 0.001185953
tense inference 0.001184458
statistical learning 0.001181918
information help 0.001173742
machine learning 0.001172053
gigaword corpus 0.001169411
learning model 0.001166631
baseline results 0.001166008
tense sense 0.001165807
training sets 0.001161788
such feature 0.001145373
tense pair 0.00111953
learning experiments 0.001118967
learning models 0.001112883
english events 0.001111174
human annotation 0.001107823
chinese treebank 0.001100101
particular frequency 0.001096597
chinese side 0.001096399
baseline method 0.001093232
classification accuracy 0.00108673
tense labels 0.001082217
pos tag 0.001081726
other cases 0.001073982
learning algorithm 0.001064411
tense cat 0.001061945
training dataset 0.001058968
results analysis 0.001058795
other categories 0.001054555
tense prediction 0.001050243
experimental results 0.001045582
ent results 0.001036018
first approach 0.00103218
frequency distribution 0.001029416
chinese noun 0.00102695
learning framework 0.001022225
similar approach 0.001020288
learning rate 0.001018495
strong results 0.001018463
baseline accuracy 0.001018176
cal learning 0.001016746
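A list in this "phrase score" format can be loaded, cleaned of floating-point representation noise, and re-ranked with a short script. A minimal sketch, assuming one whitespace-separated pair per line; the rounding precision and the embedded sample lines are illustrative assumptions, not part of any stated pipeline:

```python
# Parse "phrase score" lines, round away float-repr noise, and sort by score.
# The sample lines are drawn from the list above.
raw = """\
different data 0.0023272600000000003
training data 0.002043858
events data 0.001947512"""

def parse(text, ndigits=9):
    rows = []
    for line in text.splitlines():
        if not line.strip():
            continue
        # The score is the last whitespace-separated field;
        # everything before it is the (possibly multi-word) phrase.
        *words, score = line.split()
        rows.append((" ".join(words), round(float(score), ndigits)))
    # Highest-scoring phrases first.
    return sorted(rows, key=lambda r: r[1], reverse=True)

for phrase, score in parse(raw):
    print(f"{phrase}\t{score}")
```

Rounding to nine digits keeps every distinct score in the list distinct while discarding artifacts such as the trailing `...0000000003`.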
