GRIT: Generative Role-filler Transformers for Document-level Event Entity Extraction

Xinya Du, Alexander Rush, Claire Cardie


Abstract
We revisit the classic problem of document-level role-filler entity extraction (REE) for template filling. We argue that sentence-level approaches are ill-suited to the task and introduce a generative transformer-based encoder-decoder framework (GRIT) that is designed to model context at the document level: it can make extraction decisions across sentence boundaries; is implicitly aware of noun phrase coreference structure; and has the capacity to respect cross-role dependencies in the template structure. We evaluate our approach on the MUC-4 dataset, and show that our model performs substantially better than prior work. We also show that our modeling choices contribute to model performance, e.g., by implicitly capturing linguistic knowledge such as recognizing coreferent entity mentions.
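The generative framing treats template filling as sequence generation: the decoder emits role-filler entities as a linearized token sequence rather than tagging each sentence independently. As a minimal, hypothetical sketch of such a linearization (the role markers and tagging scheme below are illustrative assumptions, not the paper's actual output format), one can serialize a MUC-4-style template into a target sequence and parse the generated sequence back into a template:

```python
# Hypothetical sketch: linearize an extraction template into a target
# sequence for a generative encoder-decoder, and parse it back.
# Role names follow MUC-4 conventions; the bracket scheme is illustrative.

MUC4_ROLES = ["PerpInd", "PerpOrg", "Target", "Victim", "Weapon"]

def linearize(template):
    """Serialize {role: [entity string, ...]} into a flat token sequence."""
    tokens = []
    for role in MUC4_ROLES:                # fixed role order in the output
        for entity in template.get(role, []):
            tokens.append(f"[{role}]")     # role marker token
            tokens.extend(entity.split())  # entity mention tokens
    tokens.append("[END]")
    return tokens

def delinearize(tokens):
    """Recover {role: [entity string, ...]} from a generated sequence."""
    template, role, span = {}, None, []

    def flush():
        # Commit the current (role, span) pair, if any, to the template.
        if role is not None and span:
            template.setdefault(role, []).append(" ".join(span))

    for tok in tokens:
        if tok == "[END]" or (tok.startswith("[") and tok.endswith("]")):
            flush()
            span = []
            role = None if tok == "[END]" else tok[1:-1]
        else:
            span.append(tok)
    return template

template = {"PerpInd": ["guerrillas"], "Target": ["embassy building"]}
seq = linearize(template)
# seq == ["[PerpInd]", "guerrillas", "[Target]", "embassy", "building", "[END]"]
assert delinearize(seq) == template
```

Because the target is a single sequence over the whole document, the decoder can in principle fill a role with an entity mentioned several sentences away from the trigger, which is the document-level behavior the abstract emphasizes.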
Anthology ID:
2021.eacl-main.52
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
634–644
URL:
https://aclanthology.org/2021.eacl-main.52
DOI:
10.18653/v1/2021.eacl-main.52
Cite (ACL):
Xinya Du, Alexander Rush, and Claire Cardie. 2021. GRIT: Generative Role-filler Transformers for Document-level Event Entity Extraction. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 634–644, Online. Association for Computational Linguistics.
Cite (Informal):
GRIT: Generative Role-filler Transformers for Document-level Event Entity Extraction (Du et al., EACL 2021)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2021.eacl-main.52.pdf
Code:
xinyadu/doc_event_entity (+ additional community code)