LLM Dependency Parsing with In-Context Rules

Michael Ginn, Alexis Palmer


Abstract
We study whether incorporating rules (in various formats) can aid large language models in performing dependency parsing. We consider a paradigm in which LLMs first produce symbolic rules given fully labeled examples, and the rules are then provided in a subsequent call that performs the actual parsing. In addition, we experiment with providing human-created annotation guidelines in-context to the LLMs. We test on eight low-resource languages from Universal Dependencies, finding that while both methods of rule incorporation improve zero-shot performance, the benefit disappears with a few labeled in-context examples.
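The two-call paradigm described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `call_llm` is a hypothetical stand-in for any chat-completion API (stubbed here with canned outputs), and the prompt wording and rule format are invented for the example.

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call; returns canned outputs."""
    if "Induce" in prompt:
        return "RULE: a determiner attaches to the following noun (det)."
    return "1 The 2 det\n2 dog 3 nsubj\n3 barks 0 root"

def induce_rules(labeled_examples: list[str]) -> str:
    """Call 1: ask the LLM to produce symbolic rules from labeled parses."""
    prompt = ("Induce dependency-parsing rules from these labeled examples:\n"
              + "\n".join(labeled_examples))
    return call_llm(prompt)

def parse_with_rules(sentence: str, rules: str) -> str:
    """Call 2: parse a new sentence with the induced rules in-context."""
    prompt = f"Rules:\n{rules}\n\nParse into (index, word, head, deprel) lines:\n{sentence}"
    return call_llm(prompt)

rules = induce_rules(
    ["The dog barks -> (The, dog, det), (dog, barks, nsubj), (barks, ROOT, root)"]
)
parse = parse_with_rules("The dog barks", rules)
```

Replacing the stub with a real API call (and CoNLL-U-formatted examples from a Universal Dependencies treebank) yields the zero-shot, rules-in-context setup the paper evaluates.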
Anthology ID:
2025.xllm-1.17
Volume:
Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025)
Month:
August
Year:
2025
Address:
Vienna, Austria
Editors:
Hao Fei, Kewei Tu, Yuhui Zhang, Xiang Hu, Wenjuan Han, Zixia Jia, Zilong Zheng, Yixin Cao, Meishan Zhang, Wei Lu, N. Siddharth, Lilja Øvrelid, Nianwen Xue, Yue Zhang
Venues:
XLLM | WS
Publisher:
Association for Computational Linguistics
Pages:
186–196
URL:
https://preview.aclanthology.org/landing_page/2025.xllm-1.17/
Cite (ACL):
Michael Ginn and Alexis Palmer. 2025. LLM Dependency Parsing with In-Context Rules. In Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025), pages 186–196, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
LLM Dependency Parsing with In-Context Rules (Ginn & Palmer, XLLM 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.xllm-1.17.pdf