Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation

Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng


Abstract
Document-level Relation Extraction (DocRE) is a more challenging task than its sentence-level counterpart: it aims to extract relations among entities spread across multiple sentences. In this paper, we propose a semi-supervised framework for DocRE with three novel components. First, we use an axial attention module to learn the interdependency among entity pairs, which improves performance on two-hop relations. Second, we propose an adaptive focal loss to tackle the class imbalance problem of DocRE. Third, we use knowledge distillation to overcome the differences between human-annotated data and distantly supervised data. We conducted experiments on two DocRE datasets. Our model consistently outperforms strong baselines, and its performance exceeds the previous SOTA by 1.36 F1 and 1.46 Ign F1 on the DocRED leaderboard.
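To make the class-imbalance idea concrete, below is a minimal sketch of a focal-style loss for multi-label relation classification. It is an illustration of the general focal-loss technique (down-weighting easy examples so rare relation classes dominate the gradient), not the paper's exact Adaptive Focal Loss; the function name, tensor shapes, and the gamma parameter are assumptions for this sketch.

import torch
import torch.nn.functional as F

def focal_multilabel_loss(logits, labels, gamma=2.0):
    # logits: (num_pairs, num_relations) raw scores per entity pair
    # labels: (num_pairs, num_relations) multi-hot float relation labels
    p = torch.sigmoid(logits)
    # Elementwise binary cross-entropy, one term per relation class.
    ce = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    # p_t is the predicted probability of the true outcome for each class.
    p_t = p * labels + (1 - p) * (1 - labels)
    # Focal modulation: (1 - p_t)^gamma shrinks the loss of confident,
    # easy predictions, so the many negative ("no relation") pairs do not
    # overwhelm the rare positive relation classes.
    loss = ((1 - p_t) ** gamma * ce).sum(dim=-1)
    return loss.mean()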
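Similarly, the teacher-student idea can be sketched as follows, assuming a teacher trained on human-annotated data produces soft logits that regularize a student trained on distantly supervised data. The function and argument names are illustrative, and the MSE-on-logits soft loss is one common choice for logit-level distillation; the paper's exact objective may differ.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, noisy_labels, alpha=0.5):
    # Hard loss: multi-label BCE against the (noisy) distant labels.
    hard = F.binary_cross_entropy_with_logits(student_logits, noisy_labels)
    # Soft loss: match the frozen teacher's logits, transferring what the
    # teacher learned from clean, human-annotated data.
    soft = F.mse_loss(student_logits, teacher_logits.detach())
    return alpha * hard + (1 - alpha) * soft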
Anthology ID:
2022.findings-acl.132
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1672–1681
URL:
https://aclanthology.org/2022.findings-acl.132
DOI:
10.18653/v1/2022.findings-acl.132
Cite (ACL):
Qingyu Tan, Ruidan He, Lidong Bing, and Hwee Tou Ng. 2022. Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1672–1681, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation (Tan et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.132.pdf
Software:
 2022.findings-acl.132.software.zip
Code
 tonytan48/kd-docre
Data
DocRED
Re-DocRED