Yiqun Wang
2026
Do Large Language Models Grasp the Grammar? Evidence from Grammar-Book-Guided Probing in Luxembourgish
Lujun LI | Yewei Song | Lama Sleem | Yiqun Wang | Yangjie Xu | Cedric LOTHRITZ | Niccolò Gentile | Radu State | Tegawendé F. Bissyandé | Jacques Klein
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Grammar refers to the system of rules that governs the structural organization and the semantic relations among linguistic units such as sentences, phrases, and words within a given language. In natural language processing, there remains a notable scarcity of grammar-focused evaluation protocols, a gap that is even more pronounced for low-resource languages. Moreover, the extent to which large language models genuinely comprehend grammatical structure, especially the mapping between syntactic structures and meanings, remains under debate. To investigate this issue, we propose a Grammar-Book–Guided evaluation pipeline, consisting of four key stages, intended to provide a systematic and generalizable framework for grammar evaluation; in this work we take Luxembourgish as a case study. The results show a weak positive correlation between translation performance and grammatical understanding, indicating that strong translation does not necessarily imply deep grammatical competence. Larger models perform well overall due to their semantic strength but remain weak in morphology and syntax, struggling particularly with Minimal Pair tasks, while strong reasoning ability offers a promising way to enhance their grammatical understanding.
2025
Tracing and Dissecting How LLMs Recall Factual Knowledge for Real World Questions
Yiqun Wang | Chaoqun Wan | Sile Hu | Yonggang Zhang | Xiang Tian | Yaowu Chen | Xu Shen | Jieping Ye
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent advancements in large language models (LLMs) have shown promising ability to perform commonsense reasoning, bringing machines closer to human-like understanding. However, deciphering the internal reasoning processes of LLMs remains challenging due to the complex interdependencies among generated tokens, especially in practical question answering. In this study, we introduce a two-dimensional analysis framework—comprising token back-tracing and individual token decoding—to uncover how LLMs conduct factual knowledge recall. Through explanatory analysis of three typical reasoning datasets, we identify a consistent three-phase pattern: Subject Augmentation and Broadcasting, Object Retrieval and Reranking, and Conclusion Fusion and Generation. Our findings reveal that LLMs do not lack relevant knowledge but struggle to select the most accurate information based on context during the retrieval and reranking phase. Leveraging these findings, we apply representation engineering and selective fine-tuning to target the specific modules responsible for retrieval and reranking errors. Experimental results show large improvements in response accuracy in both in-domain and out-of-domain settings, validating the soundness of our interpretation.