Instruction-Tuning LLMs for Event Extraction with Annotation Guidelines

Saurabh Srivastava, Sweta Pati, Ziyu Yao


Abstract
In this work, we study the effect of annotation guidelines, i.e., textual descriptions of event types and arguments, when instruction-tuning large language models for event extraction. We conduct a series of experiments with both human-provided and machine-generated guidelines in both full- and low-data settings. Our results demonstrate the promise of annotation guidelines when a sufficient amount of training data is available and highlight their effectiveness in improving cross-schema generalization and performance on low-frequency event types.
Anthology ID:
2025.findings-acl.677
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13055–13071
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.677/
Cite (ACL):
Saurabh Srivastava, Sweta Pati, and Ziyu Yao. 2025. Instruction-Tuning LLMs for Event Extraction with Annotation Guidelines. In Findings of the Association for Computational Linguistics: ACL 2025, pages 13055–13071, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Instruction-Tuning LLMs for Event Extraction with Annotation Guidelines (Srivastava et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.677.pdf