Cognitive Flow: An LLM-Automated Framework for Quantifying Reasoning Distillation

José Matos, Catarina Silva, Hugo Gonçalo Oliveira


Abstract
The ability of large language models (LLMs) to reason effectively is crucial for a wide range of applications, from complex decision-making to scientific research. However, it remains unclear how well reasoning capabilities are transferred or preserved when LLMs undergo Knowledge Distillation (KD), a process that typically reduces model size while attempting to retain performance. In this study, we explore the effects of model distillation on the reasoning abilities of various reasoning language models (RLMs). We introduce Cognitive Flow, a novel framework that systematically extracts meaning and maps states in Chain-of-Thought (CoT) processes, offering new insights into model reasoning and enabling quantitative comparisons across RLMs. Using this framework, we investigate the impact of KD on the CoTs produced by RLMs. We target DeepSeek-R1-671B and its distilled 70B, 32B, and 14B versions, as well as QwQ-32B from the Qwen series. We evaluate the models on three subsets of mathematical reasoning tasks of varying complexity from the MMLU benchmark. Our findings demonstrate that while distillation can effectively replicate a similar reasoning style under specific conditions, it struggles with simpler problems, revealing a significant divergence in the observable thought process and a potential limitation in the transfer of a robust and adaptable problem-solving capability.
Anthology ID:
2025.inlg-main.36
Volume:
Proceedings of the 18th International Natural Language Generation Conference
Month:
October
Year:
2025
Address:
Hanoi, Vietnam
Editors:
Lucie Flek, Shashi Narayan, Lê Hồng Phương, Jiahuan Pei
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
596–616
URL:
https://preview.aclanthology.org/ingest-luhme/2025.inlg-main.36/
Cite (ACL):
José Matos, Catarina Silva, and Hugo Gonçalo Oliveira. 2025. Cognitive Flow: An LLM-Automated Framework for Quantifying Reasoning Distillation. In Proceedings of the 18th International Natural Language Generation Conference, pages 596–616, Hanoi, Vietnam. Association for Computational Linguistics.
Cite (Informal):
Cognitive Flow: An LLM-Automated Framework for Quantifying Reasoning Distillation (Matos et al., INLG 2025)
PDF:
https://preview.aclanthology.org/ingest-luhme/2025.inlg-main.36.pdf