Interpretability Rules: Jointly Bootstrapping a Neural Relation Extractor with an Explanation Decoder
Abstract
We introduce a method that transforms a rule-based relation extraction (RE) classifier into a neural one such that both interpretability and performance are achieved. Our approach jointly trains an RE classifier with a decoder that generates explanations for these extractions, using as sole supervision a set of rules that match these relations. Our evaluation on the TACRED dataset shows that our neural RE classifier outperforms the rule-based one we started from by 9 F1 points; our decoder generates explanations with a high BLEU score of over 90%; and the joint learning improves the performance of both the classifier and the decoder.
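The abstract describes a joint architecture: a shared encoder feeds both a relation classifier and a sequence decoder that generates a rule-derived explanation, with the two heads trained together on rule-matched supervision. The PyTorch sketch below is a minimal illustration of that joint objective, not the authors' implementation; the module names, the GRU encoder/decoder choice, and all hyperparameters are assumptions made for the example.

```python
import torch
import torch.nn as nn

class JointREWithExplanation(nn.Module):
    """Illustrative sketch: a shared encoder with two heads, a relation
    classifier and an explanation decoder (teacher-forced in training)."""
    def __init__(self, vocab_size, num_relations, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.rel_head = nn.Linear(hidden, num_relations)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.gen_head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, expl_in):
        # Encode the sentence; h is the final hidden state, shape (1, B, H).
        _, h = self.encoder(self.embed(tokens))
        rel_logits = self.rel_head(h.squeeze(0))          # (B, num_relations)
        # Decode the explanation conditioned on the sentence encoding.
        dec_out, _ = self.decoder(self.embed(expl_in), h)
        expl_logits = self.gen_head(dec_out)              # (B, T, vocab_size)
        return rel_logits, expl_logits

def joint_loss(rel_logits, rel_gold, expl_logits, expl_gold):
    """Sum of the classification and explanation-generation losses. Both
    targets come from the matched rule: the relation label it assigns and
    the rule itself serialized as the target explanation string."""
    ce = nn.CrossEntropyLoss()
    cls_loss = ce(rel_logits, rel_gold)
    gen_loss = ce(expl_logits.reshape(-1, expl_logits.size(-1)),
                  expl_gold.reshape(-1))
    return cls_loss + gen_loss

# Toy usage with random tensors (shapes only; 42 classes is illustrative).
model = JointREWithExplanation(vocab_size=10_000, num_relations=42)
tokens = torch.randint(0, 10_000, (8, 30))    # batch of 8 sentences
expl_in = torch.randint(0, 10_000, (8, 12))   # shifted explanation tokens
expl_gold = torch.randint(0, 10_000, (8, 12))
rel_gold = torch.randint(0, 42, (8,))
rel_logits, expl_logits = model(tokens, expl_in)
loss = joint_loss(rel_logits, rel_gold, expl_logits, expl_gold)
loss.backward()
```

Summing the two cross-entropy terms is the simplest way to realize the "joint learning" the abstract credits with improving both heads; a weighted combination would be an equally plausible variant.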
- Anthology ID: 2021.trustnlp-1.1
- Volume: Proceedings of the First Workshop on Trustworthy Natural Language Processing
- Month: June
- Year: 2021
- Address: Online
- Venue: TrustNLP
- Publisher: Association for Computational Linguistics
- Pages: 1–7
- URL: https://aclanthology.org/2021.trustnlp-1.1
- DOI: 10.18653/v1/2021.trustnlp-1.1
- Cite (ACL): Zheng Tang and Mihai Surdeanu. 2021. Interpretability Rules: Jointly Bootstrapping a Neural Relation Extractor with an Explanation Decoder. In Proceedings of the First Workshop on Trustworthy Natural Language Processing, pages 1–7, Online. Association for Computational Linguistics.
- Cite (Informal): Interpretability Rules: Jointly Bootstrapping a Neural Relation Extractor with an Explanation Decoder (Tang & Surdeanu, TrustNLP 2021)
- PDF: https://preview.aclanthology.org/auto-file-uploads/2021.trustnlp-1.1.pdf