Improving Distantly Supervised Document-Level Relation Extraction Through Natural Language Inference

Clara Vania, Grace Lee, Andrea Pierleoni


Abstract
The distant supervision (DS) paradigm has been widely used for relation extraction (RE) to alleviate the need for expensive annotations. However, it suffers from noisy labels, which lead to worse performance than that of models trained on human-annotated data, even when hundreds of times more data are used. We present a systematic study on the use of natural language inference (NLI) to improve distantly supervised document-level RE. We apply NLI in three scenarios: (i) as a filter for denoising DS labels, (ii) as a filter for model predictions, and (iii) as a standalone RE model. Our results show that NLI filtering consistently improves performance, reducing the performance gap with a model trained on human-annotated data by 2.3 F1.
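To illustrate the first scenario described in the abstract (NLI as a filter for denoising DS labels), the sketch below scores a verbalized relation hypothesis against the source document with an off-the-shelf NLI model and keeps the label only if it is entailed. This is a minimal illustration, not the authors' implementation: the model (`roberta-large-mnli`), the relation template, and the 0.5 threshold are assumptions for the example.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Assumed off-the-shelf NLI model; the paper's actual model, relation
# templates, and threshold may differ.
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Return P(entailment) for a premise/hypothesis pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1)[0]
    # roberta-large-mnli label order: contradiction, neutral, entailment
    return probs[2].item()

def keep_ds_label(document: str, head: str, tail: str,
                  template: str, threshold: float = 0.5) -> bool:
    """Keep a distantly supervised (head, relation, tail) label only if the
    document entails a natural-language verbalization of the relation."""
    hypothesis = template.format(head=head, tail=tail)
    return entailment_prob(document, hypothesis) >= threshold

# Example: a DS label that the document does not support would be filtered out.
doc = ("Marie Curie was a physicist and chemist who conducted "
       "pioneering research on radioactivity.")
print(keep_ds_label(doc, "Marie Curie", "Poland", "{head} is a citizen of {tail}."))
```

The same entailment scoring can be applied to the other two scenarios: filtering an RE model's predictions post hoc, or ranking candidate relations directly to use NLI as a standalone RE model.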
Anthology ID:
2022.deeplo-1.2
Volume:
Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing
Month:
July
Year:
2022
Address:
Hybrid
Editors:
Colin Cherry, Angela Fan, George Foster, Gholamreza (Reza) Haffari, Shahram Khadivi, Nanyun (Violet) Peng, Xiang Ren, Ehsan Shareghi, Swabha Swayamdipta
Venue:
DeepLo
Publisher:
Association for Computational Linguistics
Pages:
14–20
URL:
https://aclanthology.org/2022.deeplo-1.2
DOI:
10.18653/v1/2022.deeplo-1.2
Cite (ACL):
Clara Vania, Grace Lee, and Andrea Pierleoni. 2022. Improving Distantly Supervised Document-Level Relation Extraction Through Natural Language Inference. In Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, pages 14–20, Hybrid. Association for Computational Linguistics.
Cite (Informal):
Improving Distantly Supervised Document-Level Relation Extraction Through Natural Language Inference (Vania et al., DeepLo 2022)
PDF:
https://preview.aclanthology.org/ingest-acl-2023-videos/2022.deeplo-1.2.pdf
Data
DocRED