RAAT: Relation-Augmented Attention Transformer for Relation Modeling in Document-Level Event Extraction

Yuan Liang, Zhuoxuan Jiang, Di Yin, Bo Ren


Abstract
In the document-level event extraction (DEE) task, event arguments always scatter across sentences (the across-sentence issue) and multiple events may lie in one document (the multi-event issue). In this paper, we argue that the relation information of event arguments is of great significance for addressing the above two issues, and propose a new DEE framework which can model the relation dependencies, called Relation-augmented Document-level Event Extraction (ReDEE). More specifically, this framework features a novel and tailored transformer, named the Relation-augmented Attention Transformer (RAAT). RAAT is scalable to capture multi-scale and multi-amount argument relations. To further leverage relation information, we introduce a separate event relation prediction task and adopt a multi-task learning method to explicitly enhance event extraction performance. Extensive experiments demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on two public datasets. Our code is available at https://github.com/TencentYoutuResearch/RAAT.
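As a rough illustration of the mechanism the abstract describes, the sketch below shows one common way to inject pairwise relation information into self-attention: a learned embedding of each token pair's relation type is added to the attention logits before the softmax. This is a minimal sketch of the general idea, not the paper's actual architecture; the class name RelationAugmentedAttention, the single-head design, and the scalar per-relation bias are illustrative assumptions (RAAT's multi-scale, multi-amount relation encoding is specified in the paper and the released code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAugmentedAttention(nn.Module):
    """Single-head scaled dot-product attention with an additive
    relation bias (illustrative sketch, not the paper's exact module).
    Each (i, j) token pair carries a relation id (e.g. same-sentence,
    same-entity, no-relation); a learned embedding of that id is
    added to the attention logit for that pair."""

    def __init__(self, d_model: int, num_relations: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # One scalar bias per relation type (an assumed design choice).
        self.rel_bias = nn.Embedding(num_relations, 1)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor, rel_ids: torch.LongTensor) -> torch.Tensor:
        # x: (batch, seq, d_model); rel_ids: (batch, seq, seq) of relation ids
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = torch.einsum("bid,bjd->bij", q, k) * self.scale
        # Bias each pairwise logit by its relation-type embedding.
        scores = scores + self.rel_bias(rel_ids).squeeze(-1)
        return F.softmax(scores, dim=-1) @ v

# Minimal usage example with random inputs:
attn = RelationAugmentedAttention(d_model=64, num_relations=3)
x = torch.randn(2, 10, 64)              # two documents, 10 tokens each
rel = torch.randint(0, 3, (2, 10, 10))  # pairwise relation ids
out = attn(x, rel)                      # -> (2, 10, 64)
```

Under this formulation, tokens linked by an informative relation (say, arguments of the same entity mentioned in different sentences) can be pushed to attend to each other even when they are far apart, which is the intuition behind using argument relations to address the across-sentence and multi-event issues.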
Anthology ID:
2022.naacl-main.367
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4985–4997
URL:
https://aclanthology.org/2022.naacl-main.367
DOI:
10.18653/v1/2022.naacl-main.367
Cite (ACL):
Yuan Liang, Zhuoxuan Jiang, Di Yin, and Bo Ren. 2022. RAAT: Relation-Augmented Attention Transformer for Relation Modeling in Document-Level Event Extraction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4985–4997, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
RAAT: Relation-Augmented Attention Transformer for Relation Modeling in Document-Level Event Extraction (Liang et al., NAACL 2022)
PDF:
https://preview.aclanthology.org/naacl24-info/2022.naacl-main.367.pdf
Video:
https://preview.aclanthology.org/naacl24-info/2022.naacl-main.367.mp4
Code:
TencentYoutuResearch/EventExtraction-RAAT
Data:
ChFinAnn