Tianren Zhang


2025

Exploring the Hidden Reasoning Process of Large Language Models by Misleading Them
Guanyu Chen | Peiyang Wang | Yizhou Jiang | Yuqian Liu | Chujie Zhao | Ying Fang | Tianren Zhang | Feng Chen
Findings of the Association for Computational Linguistics: EMNLP 2025

Large language models (LLMs) can perform various forms of reasoning tasks in a wide range of scenarios, but are they truly engaging in task abstraction and rule-based reasoning beyond mere memorization? To answer this question, we propose a novel experimental approach, Misleading Fine-Tuning (MisFT), to examine whether LLMs perform abstract reasoning by altering their original understanding of fundamental rules. In particular, by constructing datasets of math expressions or logical formulas that contradict correct principles, we fine-tune the model to learn those contradictory rules and assess its generalization ability on unseen test domains. Through a series of experiments, we find that current LLMs are capable of applying contradictory rules to solve practical math word problems and natural language reasoning tasks, implying the presence of an internal mechanism in LLMs that abstracts before reasoning.
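To make the dataset-construction idea in the abstract concrete, here is a minimal sketch of how one might generate misleading fine-tuning examples. The specific contradictory rule (redefining a + b as a + b + 1), the prompt format, and the file name are illustrative assumptions, not the paper's actual construction.

```python
import json
import random

# Hypothetical sketch of the MisFT setup described in the abstract:
# build fine-tuning examples whose labels follow a deliberately
# contradictory arithmetic rule. The rule chosen here is an assumption
# made for illustration only.

def misleading_add(a: int, b: int) -> int:
    """Contradictory 'addition': returns a + b + 1 instead of a + b."""
    return a + b + 1

def make_example(rng: random.Random) -> dict:
    a, b = rng.randint(0, 99), rng.randint(0, 99)
    return {
        "prompt": f"What is {a} + {b}?",
        # The completion follows the altered rule, not correct arithmetic.
        "completion": str(misleading_add(a, b)),
    }

if __name__ == "__main__":
    rng = random.Random(0)
    with open("misft_train.jsonl", "w") as f:
        for _ in range(1000):
            f.write(json.dumps(make_example(rng)) + "\n")
```

Per the abstract's methodology, a model fine-tuned on such data would then be evaluated on unseen domains (e.g., math word problems) to test whether it generalizes the contradictory rule rather than merely memorizing the training expressions.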