Internal Chain-of-Thought: Empirical Evidence for Layer-wise Subtask Scheduling in LLMs

Zhipeng Yang, Junzhuo Li, Siyu Xia, Xuming Hu


Abstract
We show that large language models (LLMs) exhibit an internal chain-of-thought: they sequentially decompose and execute composite tasks layer by layer. Two claims ground our study: (i) distinct subtasks are learned at different network depths, and (ii) these subtasks are executed sequentially across layers. On a benchmark of 15 two-step composite tasks, we employ layer-from context-masking and propose a novel cross-task patching method, confirming (i). To examine claim (ii), we apply LogitLens to decode hidden states, revealing a consistent layer-wise execution pattern. We further replicate our analysis on the real-world TRACE benchmark, observing the same stepwise dynamics. Together, our results enhance LLM transparency by showing that models can internally plan and execute subtasks (or instructions), opening avenues for fine-grained, instruction-level activation steering.
Anthology ID:
2025.emnlp-main.1147
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
22547–22575
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1147/
Cite (ACL):
Zhipeng Yang, Junzhuo Li, Siyu Xia, and Xuming Hu. 2025. Internal Chain-of-Thought: Empirical Evidence for Layer-wise Subtask Scheduling in LLMs. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 22547–22575, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Internal Chain-of-Thought: Empirical Evidence for Layer-wise Subtask Scheduling in LLMs (Yang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1147.pdf
Checklist:
 2025.emnlp-main.1147.checklist.pdf