Yunji Wang
2025
Understanding the Language Model to Solve the Symbolic Multi-Step Reasoning Problem from the Perspective of Buffer Mechanism
Zhiwei Wang | Yunji Wang | Zhongwang Zhang | Zhangchen Zhou | Hui Jin | Tianyang Hu | Jiacheng Sun | Zhenguo Li | Yaoyu Zhang | Zhi-Qin John Xu
Findings of the Association for Computational Linguistics: EMNLP 2025
Large language models have consistently struggled with complex reasoning tasks, such as mathematical problem-solving. Investigating their internal reasoning mechanisms can help us design better model architectures and training strategies, ultimately enhancing their reasoning capabilities. In this study, we construct a symbolic multi-step reasoning task to investigate how information propagates in Transformer models when they solve the task through direct answering and through Chain-of-Thought (CoT) reasoning. We introduce the concept of a buffer mechanism: the model stores distinct pieces of information in separate buffers and selectively extracts them through the query-key matrix. Building on this mechanism, we propose a random-matrix-based algorithm to enhance the model's reasoning ability. The algorithm introduces only 132 trainable parameters, yet yields significant performance improvements on 7 multi-step reasoning datasets, including PrOntoQA, LogicAsker, and LogicInference. These findings provide new insights into the inner workings of large language models.
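To make the buffer idea in the abstract concrete, here is a minimal numpy sketch of one way to read it: information written through near-orthogonal random matrices (the "buffers") can be superposed in a single vector and then selectively recovered by a query-key-style read-out. The names B1, B2, and the read-out matrix are our own illustrative choices, not the paper's notation or its actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # residual-stream (embedding) dimension

# Two random "buffers": high-dimensional Gaussian matrices are
# nearly orthogonal to each other, so records written through
# different buffers stay linearly separable.
B1 = rng.standard_normal((d, d)) / np.sqrt(d)
B2 = rng.standard_normal((d, d)) / np.sqrt(d)

x1 = rng.standard_normal(d)  # information stored at step 1
x2 = rng.standard_normal(d)  # information stored at step 2

h = B1 @ x1 + B2 @ x2        # both records superposed in one vector

# A query-key-style read-out B1.T recovers the buffer-1 record:
# B1.T @ B1 is approximately the identity, while B1.T @ B2
# contributes only small cross-terms.
read = B1.T @ h

cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"similarity to x1: {cos(read, x1):.2f}")  # markedly higher (selected)
print(f"similarity to x2: {cos(read, x2):.2f}")  # near zero (ignored)
```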
2022
CML: A Contrastive Meta Learning Method to Estimate Human Label Confidence Scores and Reduce Data Collection Cost
Bo Dong | Yiyi Wang | Hanbo Sun | Yunji Wang | Alireza Hashemi | Zheng Du
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
Deep neural network models are especially susceptible to noise in annotated labels. In the real world, annotated data typically contains noise caused by a variety of factors such as task difficulty, annotator experience, and annotator bias. Label quality is critical for label validation tasks; however, correcting for noise by collecting more data is often costly. In this paper, we propose a contrastive meta-learning framework (CML) to address the challenges introduced by noisy annotated data, specifically in the context of natural language processing. CML combines contrastive learning and meta-learning to improve the quality of text feature representations; meta-learning is also used to generate confidence scores that assess label quality. We demonstrate that a model built on CML-filtered data outperforms a model built on clean data. Furthermore, we perform experiments on de-identified commercial voice assistant datasets and demonstrate that our model outperforms several SOTA approaches.
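The abstract pairs contrastive representation learning with meta-learned confidence scores. As a rough illustration of the meta-learning half only, the sketch below uses a generic learning-to-reweight step (a common bi-level scheme, not necessarily the paper's method): per-sample confidence weights are tuned so that a one-step virtual model update performs well on a small trusted meta set. All names, shapes, and learning rates here are hypothetical toy choices.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

model   = torch.nn.Linear(32, 2)     # stand-in text classifier
x_noisy = torch.randn(64, 32)        # features with noisy labels
y_noisy = torch.randint(0, 2, (64,))
x_meta  = torch.randn(16, 32)        # small trusted "meta" set
y_meta  = torch.randint(0, 2, (16,))

# Per-sample confidence logits, learned via the meta objective.
w = torch.zeros(64, requires_grad=True)

for _ in range(50):
    # Inner step: confidence-weighted loss on the noisy data.
    loss_i  = F.cross_entropy(model(x_noisy), y_noisy, reduction="none")
    weights = torch.sigmoid(w)
    inner   = (weights * loss_i).mean()
    grads = torch.autograd.grad(inner, model.parameters(), create_graph=True)
    # Virtual one-step-updated parameters (functional update).
    W = model.weight - 0.1 * grads[0]
    b = model.bias - 0.1 * grads[1]
    # Outer step: the trusted meta set judges the virtual update,
    # and the gradient flows back into the confidence logits w.
    meta_loss = F.cross_entropy(x_meta @ W.t() + b, y_meta)
    w.grad = torch.autograd.grad(meta_loss, w)[0]
    with torch.no_grad():
        w -= 1.0 * w.grad
        # Commit the real model update with the refreshed confidences.
        model.weight -= 0.1 * grads[0]
        model.bias -= 0.1 * grads[1]

# sigmoid(w) now acts as a per-example confidence score; low-confidence
# examples can be filtered out before retraining on the cleaned set.
```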