TORSO: Template-Oriented Reasoning Towards General Tasks

Minhyuk Kim, Seungyoon Lee, Heuiseok Lim


Abstract
Approaches that guide Large Language Models (LLMs) to emulate human reasoning during response generation have emerged as effective methods for enabling them to solve complex problems in a step-by-step manner, thereby achieving superior performance. However, most existing approaches that use few-shot prompts to generate responses depend heavily on the provided examples, limiting the utilization of the model's inherent reasoning capabilities. Moreover, constructing task-specific few-shot prompts is often costly and may lead to inconsistencies across different tasks. In this work, we introduce Template-Oriented Reasoning (TORSO), which elicits the model to utilize its internal reasoning abilities to generate proper responses across various tasks without the need for manually crafted few-shot examples. Our experimental results demonstrate that TORSO achieves strong performance on diverse LLM benchmarks with reasonable rationales.
Anthology ID:
2025.emnlp-main.851
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
16821–16829
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.851/
Cite (ACL):
Minhyuk Kim, Seungyoon Lee, and Heuiseok Lim. 2025. TORSO: Template-Oriented Reasoning Towards General Tasks. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 16821–16829, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
TORSO: Template-Oriented Reasoning Towards General Tasks (Kim et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.851.pdf
Checklist:
2025.emnlp-main.851.checklist.pdf