Abstract
Distant supervision has been a widely used method for neural relation extraction because it conveniently labels datasets automatically. However, existing works on distantly supervised relation extraction suffer from low-quality test sets, which leads to considerably biased performance evaluation. These biases not only result in unfair evaluations but also mislead the optimization of neural relation extractors. To mitigate this problem, we propose a novel evaluation method named active testing, which utilizes both the noisy test set and a few manual annotations. Experiments on a widely used benchmark show that our proposed approach can yield approximately unbiased evaluations for distantly supervised relation extractors.

- Anthology ID: 2020.findings-emnlp.20
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
- Month: November
- Year: 2020
- Address: Online
- Editors: Trevor Cohn, Yulan He, Yang Liu
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 204–211
- URL: https://aclanthology.org/2020.findings-emnlp.20
- DOI: 10.18653/v1/2020.findings-emnlp.20
- Cite (ACL): Pengshuai Li, Xinsong Zhang, Weijia Jia, and Wei Zhao. 2020. Active Testing: An Unbiased Evaluation Method for Distantly Supervised Relation Extraction. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 204–211, Online. Association for Computational Linguistics.
- Cite (Informal): Active Testing: An Unbiased Evaluation Method for Distantly Supervised Relation Extraction (Li et al., Findings 2020)
- PDF: https://preview.aclanthology.org/add_acl24_videos/2020.findings-emnlp.20.pdf