@inproceedings{xiong-etal-2025-mapping,
    title = "Mapping the Minds of {LLM}s: A Graph-Based Analysis of Reasoning {LLM}s",
    author = "Xiong, Zhen  and
      Cai, Yujun  and
      Li, Zhecheng  and
      Wang, Yiwei",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.896/",
    pages = "17762--17774",
    ISBN = "979-8-89176-332-6",
    abstract = "Recent advances in test-time scaling have enabled Large Language Models (LLMs) to display sophisticated reasoning abilities via extended Chain-of-Thought (CoT) generation. Despite their impressive reasoning abilities, Large Reasoning Models (LRMs) frequently display unstable behaviors, e.g., hallucinating unsupported premises, overthinking simple tasks, and displaying higher sensitivity to prompt variations. This raises a deeper research question: $\textit{How can we represent the reasoning process of LRMs to map their minds?}$ To address this, we propose a unified graph-based analytical framework for fine-grained modeling and quantitative analysis of LRM reasoning dynamics. Our method first clusters long, verbose CoT outputs into semantically coherent reasoning steps, then constructs directed reasoning graphs to capture contextual and logical dependencies among these steps. Through a comprehensive analysis of derived reasoning graphs, we also reveal that key structural properties, such as exploration density, branching, and convergence ratios, strongly correlate with models' performance. The proposed framework enables quantitative evaluation of internal reasoning structure and quality beyond conventional metrics and also provides practical insights for prompt engineering and cognitive analysis of LLMs. Code and resources will be released to facilitate future research in this direction."
}