Separating Retention from Extraction in the Evaluation of End-to-end Relation Extraction
Bruno Taillé, Vincent Guigue, Geoffrey Scoutheeten, Patrick Gallinari
Abstract
State-of-the-art NLP models can adopt shallow heuristics that limit their generalization capability (McCoy et al., 2019). Such heuristics include lexical overlap with the training set in Named-Entity Recognition (Taillé et al., 2020) and Event or Type heuristics in Relation Extraction (Rosenman et al., 2020). In the more realistic end-to-end RE setting, we can expect yet another heuristic: the mere retention of training relation triples. In this paper we propose two experiments confirming that retention of known facts is a key factor of performance on standard benchmarks. Furthermore, one of these experiments suggests that a pipeline model able to use intermediate type representations is less prone to over-relying on retention.
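The core idea of the abstract can be made concrete with a small evaluation sketch: partition the gold test triples by whether the exact triple already occurs in the training set, then score predictions on each partition separately. The snippet below is a minimal illustration under that assumption; the function names, the exact-match partitioning, and the micro-F1 scoring are illustrative choices, not the authors' published protocol.

```python
# Minimal sketch: score relation triples separately on the "seen" partition
# (exact triples retained from training) and the "unseen" partition (triples
# requiring genuine extraction). Illustrative only, not the paper's exact setup.
from typing import Iterable, Set, Tuple

Triple = Tuple[str, str, str]  # (head mention, relation type, tail mention)

def f1(pred: Set[Triple], gold: Set[Triple]) -> float:
    """Micro F1 over exact triple matches."""
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    p, r = tp / len(pred), tp / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def seen_unseen_f1(train_gold: Iterable[Triple],
                   test_gold: Iterable[Triple],
                   test_pred: Iterable[Triple]) -> Tuple[float, float]:
    """Return (F1 on seen triples, F1 on unseen triples)."""
    train = set(train_gold)
    gold_seen = {t for t in test_gold if t in train}
    gold_unseen = set(test_gold) - gold_seen
    pred = set(test_pred)
    # Restrict predictions to each partition before scoring.
    return (f1({t for t in pred if t in train}, gold_seen),
            f1({t for t in pred if t not in train}, gold_unseen))
```

A large gap between the seen and unseen scores would indicate that benchmark performance is driven in part by retention of known facts rather than by extraction from context.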
- Anthology ID: 2021.emnlp-main.816
- Volume: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
- Month: November
- Year: 2021
- Address: Online and Punta Cana, Dominican Republic
- Editors: Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 10438–10449
- URL: https://aclanthology.org/2021.emnlp-main.816
- DOI: 10.18653/v1/2021.emnlp-main.816
- Cite (ACL): Bruno Taillé, Vincent Guigue, Geoffrey Scoutheeten, and Patrick Gallinari. 2021. Separating Retention from Extraction in the Evaluation of End-to-end Relation Extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10438–10449, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal): Separating Retention from Extraction in the Evaluation of End-to-end Relation Extraction (Taillé et al., EMNLP 2021)
- PDF: https://preview.aclanthology.org/nschneid-patch-2/2021.emnlp-main.816.pdf
- Data: SciERC