PRICoT: Principle Retrieval and Injection from Inference Successes and Failures for CoT Improvement

Yudai Yamazaki, Naoto Takeda, Yasutaka Nishimura, Kazushi Ikeda


Abstract
In-Context Learning (ICL) approaches, such as Zero-Shot and Few-Shot prompting, allow Large Language Models (LLMs) to tackle reasoning tasks without additional fine-tuning. However, Zero-Shot prompting often struggles with more complex tasks, while Few-Shot prompting demands considerable manual effort and domain expertise to design effective prompts. Although existing work has attempted to alleviate these issues by extracting reasoning rules from carefully crafted, task-specific representative examples, creating or obtaining such examples can be impractical in real-world scenarios. In this paper, we propose a novel approach that enhances inference accuracy by injecting reasoning principles extracted from QA data, without relying on representative Few-Shot exemplars. This offers a lightweight yet adaptive way to boost accuracy on complex reasoning tasks while avoiding the manual effort and high exploration costs typical of prior methods. Experiments with GPT-4o show that our method outperforms similarity-based Few-Shot and Zero-Shot prompting on challenging benchmarks such as GPQA-diamond, achieving an absolute accuracy improvement of up to 2% in scenarios where carefully crafted Few-Shot examples are unavailable.
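
The abstract describes the pipeline only at a high level. As a purely illustrative aid, the sketch below shows one way a principle store with similarity-based retrieval and prompt injection could look. All names here (PrincipleStore, bow_vector, build_prompt, the prompt template) are hypothetical assumptions, not the authors' implementation; the paper's method extracts principles from inference successes and failures, and a real system would use a learned embedder rather than this toy bag-of-words similarity.

# Hypothetical sketch of principle retrieval and injection, in the spirit
# of PRICoT. Not the authors' code; names and templates are assumptions.
from collections import Counter
import math

def bow_vector(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a neural encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class PrincipleStore:
    """Stores reasoning principles distilled from past successes and failures."""
    def __init__(self) -> None:
        self.principles: list[str] = []

    def add(self, principle: str) -> None:
        self.principles.append(principle)

    def retrieve(self, question: str, k: int = 3) -> list[str]:
        # Rank stored principles by similarity to the new question.
        q = bow_vector(question)
        ranked = sorted(self.principles,
                        key=lambda p: cosine(q, bow_vector(p)),
                        reverse=True)
        return ranked[:k]

def build_prompt(question: str, store: PrincipleStore) -> str:
    """Inject retrieved principles ahead of a Zero-Shot CoT instruction."""
    header = "\n".join(f"- {p}" for p in store.retrieve(question))
    return (f"Useful reasoning principles:\n{header}\n\n"
            f"Question: {question}\nLet's think step by step.")

store = PrincipleStore()
store.add("Check unit consistency before combining physical quantities.")
store.add("When a problem mentions rates, express all rates per unit time.")
store.add("Eliminate answer choices that violate conservation laws.")
print(build_prompt("A reaction rate doubles every 10 K increase; ...", store))

The key design point the abstract emphasizes is that the injected text consists of reusable principles rather than carefully crafted Few-Shot exemplars, so the store can be populated automatically from QA data.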
Anthology ID:
2025.inlg-main.35
Volume:
Proceedings of the 18th International Natural Language Generation Conference
Month:
October
Year:
2025
Address:
Hanoi, Vietnam
Editors:
Lucie Flek, Shashi Narayan, Lê Hồng Phương, Jiahuan Pei
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
576–595
URL:
https://preview.aclanthology.org/author-page-lei-gao-usc/2025.inlg-main.35/
Cite (ACL):
Yudai Yamazaki, Naoto Takeda, Yasutaka Nishimura, and Kazushi Ikeda. 2025. PRICoT: Principle Retrieval and Injection from Inference Successes and Failures for CoT Improvement. In Proceedings of the 18th International Natural Language Generation Conference, pages 576–595, Hanoi, Vietnam. Association for Computational Linguistics.
Cite (Informal):
PRICoT: Principle Retrieval and Injection from Inference Successes and Failures for CoT Improvement (Yamazaki et al., INLG 2025)
PDF:
https://preview.aclanthology.org/author-page-lei-gao-usc/2025.inlg-main.35.pdf