Chain of Methodologies: Scaling Test Time Computation without Training

Cong Liu, Jie Wu, Weigang Wu, Xu Chen, Liang Lin, Wei-Shi Zheng

Abstract
Large Language Models (LLMs) often struggle with complex reasoning tasks because the in-depth insights such tasks require are frequently absent from publicly available training documents. This paper introduces the Chain of Methodologies (CoM), a simple yet innovative iterative prompting framework that builds structured reasoning processes by injecting human methodological insights, enabling LLMs to perform long and effective reasoning on complex tasks. Assuming that LLMs possess some metacognitive ability, CoM uses user-defined methodologies to elicit the cognitive insights that LLMs have implicitly acquired from their training data. Experimental results show that CoM outperforms competitive baselines, highlighting the potential of training-free prompting methods as general solutions for complex reasoning tasks, and the possibility of incorporating human-like methodological insights to bridge the gap to human-level reasoning.
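
Illustrative sketch (not from the paper): the abstract describes CoM as an iterative prompting loop that injects user-defined methodologies one at a time to structure the model's reasoning at test time. The minimal Python sketch below shows one plausible reading of that loop, assuming an OpenAI-style chat API; the METHODOLOGIES list, the prompt wording, and the function solve_with_com are hypothetical illustrations, not the authors' implementation.

# Hypothetical sketch of a CoM-style iterative prompting loop.
# The paper's actual prompts, loop structure, and stopping rule are not
# reproduced here; all names and wording below are illustrative.
# Assumes the `openai` Python package (>= 1.0) and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

# User-defined methodologies: human problem-solving strategies injected
# one step at a time to structure the model's reasoning (examples only).
METHODOLOGIES = [
    "Restate the problem and identify what is being asked.",
    "Decompose the problem into smaller subproblems.",
    "Solve each subproblem, showing intermediate reasoning.",
    "Verify each intermediate result and check for contradictions.",
    "Synthesize the verified results into a final answer.",
]

def solve_with_com(task: str, model: str = "gpt-4o-mini") -> str:
    """Iteratively prompt the model, applying one methodology per turn."""
    messages = [{"role": "user", "content": f"Task: {task}"}]
    answer = ""
    for step in METHODOLOGIES:
        messages.append({
            "role": "user",
            "content": f"Apply the following methodology before answering: {step}",
        })
        reply = client.chat.completions.create(model=model, messages=messages)
        answer = reply.choices[0].message.content
        # Keep each step's reasoning in context so later steps build on it.
        messages.append({"role": "assistant", "content": answer})
    return answer  # output of the final synthesis step

if __name__ == "__main__":
    print(solve_with_com("If a train travels 60 km in 45 minutes, what is its average speed in km/h?"))

Under this reading, each methodology adds another model call and keeps the accumulated reasoning in context, so the reasoning trace lengthens at inference time with no parameter updates, matching the title's framing of scaling test-time computation without training.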
Anthology ID: 2025.findings-acl.276
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 5298–5312
URL: https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.276/
DOI: 10.18653/v1/2025.findings-acl.276
Cite (ACL): Cong Liu, Jie Wu, Weigang Wu, Xu Chen, Liang Lin, and Wei-Shi Zheng. 2025. Chain of Methodologies: Scaling Test Time Computation without Training. In Findings of the Association for Computational Linguistics: ACL 2025, pages 5298–5312, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Chain of Methodologies: Scaling Test Time Computation without Training (Liu et al., Findings 2025)
PDF: https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.276.pdf