TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task

Christoph Alt, Aleksandra Gabryszak, Leonhard Hennig


Abstract
TACRED is one of the largest and most widely used crowdsourced datasets in Relation Extraction (RE). However, even with recent advances in unsupervised pre-training and knowledge-enhanced neural RE, models still show a high error rate. In this paper, we investigate two questions: Have we reached a performance ceiling, or is there still room for improvement? And how do crowd annotations, the dataset, and the models contribute to this error rate? To answer these questions, we first validate the most challenging 5K examples in the development and test sets using trained annotators. We find that label errors account for 8% absolute F1 test error, and that more than 50% of the examples need to be relabeled. On the relabeled test set, the average F1 score of a large baseline model set improves from 62.1 to 70.1. After validation, we analyze misclassifications on the challenging instances, categorize them into linguistically motivated error groups, and verify the resulting error hypotheses on three state-of-the-art RE models. We show that two groups of ambiguous relations are responsible for most of the remaining errors and that models may adopt shallow heuristics on the dataset when entities are not masked.
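For readers who want to reproduce the re-scoring step described above, the following is a minimal sketch of TACRED-style micro-F1 scoring (which ignores the negative no_relation class) against a revised label set. It is not the authors' official scorer; the file names test_revised.json and predictions.json are hypothetical placeholders, and the actual revised-label patches ship with the DFKI-NLP/tacrev repository.

```python
# Minimal sketch: recompute TACRED-style micro-F1 against revised labels.
# Assumes two hypothetical JSON files mapping example id -> relation label.

import json


def micro_f1(gold: dict, pred: dict, negative: str = "no_relation"):
    """Micro precision/recall/F1 over positive relations only,
    following the standard TACRED scoring convention."""
    correct = guessed = actual = 0
    for ex_id, gold_label in gold.items():
        pred_label = pred.get(ex_id, negative)
        if gold_label != negative:
            actual += 1          # gold positive instances
        if pred_label != negative:
            guessed += 1         # predicted positive instances
            if pred_label == gold_label:
                correct += 1     # true positives
    precision = correct / guessed if guessed else 0.0
    recall = correct / actual if actual else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1


if __name__ == "__main__":
    gold = json.load(open("test_revised.json"))   # hypothetical path
    pred = json.load(open("predictions.json"))    # hypothetical path
    p, r, f = micro_f1(gold, pred)
    print(f"P={p:.3f}  R={r:.3f}  F1={f:.3f}")
```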
Anthology ID:
2020.acl-main.142
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1558–1569
URL:
https://aclanthology.org/2020.acl-main.142
DOI:
10.18653/v1/2020.acl-main.142
Cite (ACL):
Christoph Alt, Aleksandra Gabryszak, and Leonhard Hennig. 2020. TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1558–1569, Online. Association for Computational Linguistics.
Cite (Informal):
TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task (Alt et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.142.pdf
Video:
http://slideslive.com/38928889
Code:
DFKI-NLP/tacrev
Data:
TACRED-Revisited, SemEval-2010 Task 8