Self-Correction Makes LLMs Better Parsers

Ziyan Zhang, Yang Hou, Chen Gong, Zhenghua Li


Abstract
Large language models (LLMs) have achieved remarkable success across various natural language processing (NLP) tasks. However, recent studies suggest that they still face challenges in performing fundamental NLP tasks essential for deep language understanding, particularly syntactic parsing. In this paper, we conduct an in-depth analysis of LLM parsing capabilities, delving into the underlying causes of why LLMs struggle with this task and the specific shortcomings they exhibit. We find that LLMs may be limited in their ability to fully leverage grammar rules from existing treebanks, restricting their capability to generate syntactic structures. To help LLMs acquire knowledge without additional training, we propose a self-correction method that leverages grammar rules from existing treebanks to guide LLMs in correcting previous errors. Specifically, we automatically detect potential errors and dynamically search for relevant rules, offering hints and examples to guide LLMs in making corrections themselves. Experimental results on three datasets using various LLMs demonstrate that our method significantly improves performance in both in-domain and cross-domain settings.
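The abstract sketches a loop: parse, automatically detect potential errors against grammar rules from an existing treebank, then re-prompt the LLM with hints so it corrects itself. A minimal illustrative sketch of that idea follows; all names (`GRAMMAR_RULES`, `find_violations`, `self_correct`, the tree encoding, the `llm` callable) are hypothetical, not from the paper.

```python
# Hypothetical sketch of the abstract's self-correction loop.
# Trees are nested tuples: (label, children), where a preterminal's
# children slot holds the word string instead of a list of nodes.

GRAMMAR_RULES = {  # toy production rules, standing in for treebank-derived ones
    "S": [("NP", "VP")],
    "NP": [("DT", "NN"), ("NN",)],
    "VP": [("VBZ", "NP"), ("VBZ",)],
}

def find_violations(tree):
    """Return (parent, children-labels) pairs not licensed by the grammar."""
    bad = []
    def walk(node):
        label, children = node
        if isinstance(children, str):   # preterminal over a word: nothing to check
            return
        kids = tuple(child[0] for child in children)
        if kids not in GRAMMAR_RULES.get(label, []):
            bad.append((label, kids))
        for child in children:
            walk(child)
    walk(tree)
    return bad

def self_correct(sentence, llm, max_rounds=3):
    """Re-prompt `llm` with rule hints until the parse has no violations."""
    prompt = f"Parse: {sentence}"
    tree = llm(prompt)
    for _ in range(max_rounds):
        violations = find_violations(tree)
        if not violations:
            break
        hints = "; ".join(
            f"{parent} -> {' '.join(kids)} is not a valid rule; allowed: "
            + ", ".join(" ".join(rule) for rule in GRAMMAR_RULES.get(parent, []))
            for parent, kids in violations
        )
        tree = llm(f"{prompt}\nYour previous parse had errors. Hints: {hints}")
    return tree
```

In this sketch, `llm` is any callable that maps a prompt to a parse tree; the real method additionally retrieves relevant rules and in-context examples dynamically rather than from a fixed toy grammar.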
Anthology ID:
2025.findings-emnlp.357
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6749–6762
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.357/
DOI:
10.18653/v1/2025.findings-emnlp.357
Cite (ACL):
Ziyan Zhang, Yang Hou, Chen Gong, and Zhenghua Li. 2025. Self-Correction Makes LLMs Better Parsers. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 6749–6762, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Self-Correction Makes LLMs Better Parsers (Zhang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.357.pdf
Checklist:
2025.findings-emnlp.357.checklist.pdf