Active Testing: An Unbiased Evaluation Method for Distantly Supervised Relation Extraction

Pengshuai Li, Xinsong Zhang, Weijia Jia, Wei Zhao


Abstract
Distant supervision has been a widely used method for neural relation extraction because it conveniently produces automatically labeled datasets. However, existing work on distantly supervised relation extraction suffers from the low quality of the test set, which leads to considerably biased performance evaluation. These biases not only result in unfair evaluations but also mislead the optimization of neural relation extractors. To mitigate this problem, we propose a novel evaluation method named active testing, which utilizes both the noisy test set and a few manual annotations. Experiments on a widely used benchmark show that our proposed approach yields approximately unbiased evaluations for distantly supervised relation extractors.
Anthology ID:
2020.findings-emnlp.20
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
204–211
URL:
https://aclanthology.org/2020.findings-emnlp.20
DOI:
10.18653/v1/2020.findings-emnlp.20
Cite (ACL):
Pengshuai Li, Xinsong Zhang, Weijia Jia, and Wei Zhao. 2020. Active Testing: An Unbiased Evaluation Method for Distantly Supervised Relation Extraction. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 204–211, Online. Association for Computational Linguistics.
Cite (Informal):
Active Testing: An Unbiased Evaluation Method for Distantly Supervised Relation Extraction (Li et al., Findings 2020)
PDF:
https://preview.aclanthology.org/naacl24-info/2020.findings-emnlp.20.pdf
Optional supplementary material:
2020.findings-emnlp.20.OptionalSupplementaryMaterial.pdf