@inproceedings{wang-xu-2025-thoughtprobe,
    title = "{T}hought{P}robe: Classifier-Guided {LLM} Thought Space Exploration via Probing Representations",
    author = "Wang, Zijian  and
      Xu, Chang",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.307/",
    pages = "6029--6050",
    ISBN = "979-8-89176-332-6",
    abstract = "This paper introduces ThoughtProbe, a novel inference-time framework that leverages the hidden reasoning features of Large Language Models (LLMs) to improve their reasoning performance. Unlike previous works that manipulate the hidden representations to steer LLM generation, we harness them as discriminative signals to guide the tree-structured response space exploration. In each node expansion, a classifier serves as a scoring and ranking mechanism that efficiently allocates computational resources by prioritizing higher score candidates for continuation. After completing the tree expansion, we collect answers from all branches to form a candidate answer pool. We then propose a branch-aggregation method that marginalizes over all supporting branches by aggregating their CoT scores, thereby identifying the optimal answer from the pool. Experimental results show that our framework{'}s comprehensive exploration not only covers valid reasoning chains but also effectively identifies them, achieving significant improvements across multiple arithmetic reasoning benchmarks."
}