Probing Representations for Document-level Event Extraction

Barry Wang, Xinya Du, Claire Cardie


Abstract
The probing classifiers framework has been employed for interpreting deep neural network models for a variety of natural language processing (NLP) applications. Studies, however, have largely focused on sentence-level NLP tasks. This work is the first to apply the probing paradigm to representations learned for document-level information extraction (IE). We designed eight embedding probes to analyze surface, semantic, and event-understanding capabilities relevant to document-level event extraction. We applied them to the representations learned by models from three different LLM-based document-level IE approaches on a standard dataset. We found that trained encoders from these models yield embeddings that can modestly improve argument detection and labeling but only slightly enhance event-level tasks, albeit with trade-offs in information helpful for coherence and event-type prediction. We further found that encoder models struggle with document length and cross-sentence discourse.
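To illustrate the probing-classifier paradigm the abstract refers to, the sketch below trains a simple linear probe on frozen embeddings. It is a minimal, hypothetical example, not the paper's actual setup: the embeddings are synthetic stand-ins for encoder outputs, and the binary label stands in for a probe task such as detecting whether a span is an event argument.

```python
# Minimal sketch of a probing classifier: a linear model trained on frozen
# embeddings to test whether a property is linearly decodable from them.
# The embeddings and labels here are synthetic placeholders, not the paper's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical frozen span embeddings (n_examples x hidden_dim) and binary
# probe labels (e.g., "is this span an event argument?").
X = rng.normal(size=(1000, 768))
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The probe itself: a simple linear classifier kept deliberately low-capacity,
# so that high accuracy suggests the property is encoded in the embeddings
# rather than computed by the probe.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```

In practice the probe's accuracy would be compared against a baseline (e.g., the same probe on embeddings from an untrained encoder) to judge how much of the probed information the training actually added.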
Anthology ID:
2023.findings-emnlp.844
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12675–12683
URL:
https://aclanthology.org/2023.findings-emnlp.844
DOI:
10.18653/v1/2023.findings-emnlp.844
Cite (ACL):
Barry Wang, Xinya Du, and Claire Cardie. 2023. Probing Representations for Document-level Event Extraction. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12675–12683, Singapore. Association for Computational Linguistics.
Cite (Informal):
Probing Representations for Document-level Event Extraction (Wang et al., Findings 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2023.findings-emnlp.844.pdf