Meta-Reasoning Improves Tool Use in Large Language Models

Lisa Alazraki, Marek Rei


Abstract
External tools help large language models succeed at tasks where they would otherwise typically fail. In existing frameworks, tool choice at test time relies on naive greedy decoding, regardless of whether the model has been fine-tuned on tool-annotated data or prompted with in-context examples. In contrast, we find that gathering a suitable set of candidate tools and choosing among them is more likely to yield an optimal selection. We present Tool selECTion via meta-reasONing (TECTON), a two-phase system that first *reasons* over a task and outputs candidate tools using a custom fine-tuned language-modelling head. Then, with the custom head disabled, it *meta-reasons* (i.e., it reasons over the previous reasoning process) to make a final choice. We show that TECTON yields substantial gains, both in-distribution and out-of-distribution, on a range of math reasoning datasets.
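The two-phase selection described in the abstract can be sketched as a toy program. This is an illustrative assumption, not the authors' implementation: the `overlap` heuristic stands in for the fine-tuned language-modelling head (phase 1) and for the model's meta-reasoning over the candidate set (phase 2), and the tool names and function signatures are invented for the example.

```python
# Hypothetical sketch of TECTON-style two-phase tool selection.
# Phase 1 ("reasoning"): propose a small candidate set of tools.
# Phase 2 ("meta-reasoning"): reason over the phase-1 output, with the
# custom head disabled, to make the final choice.
# Both phases are stubbed with a toy token-overlap score.

def overlap(text, other):
    """Toy relevance score: number of shared lowercase tokens."""
    return len(set(text.lower().split()) & set(other.lower().split()))

def propose_candidates(task, tools, k=3):
    """Phase 1: score every tool against the task and keep the top-k.
    Stands in for sampling from the custom fine-tuned LM head."""
    ranked = sorted(tools, key=lambda t: overlap(task, t["description"]),
                    reverse=True)
    return ranked[:k]

def meta_reason(task, candidates):
    """Phase 2: reason over the candidate set (the output of phase 1)
    and commit to a single tool."""
    return max(candidates,
               key=lambda t: overlap(task, t["name"] + " " + t["description"]))

def select_tool(task, tools):
    candidates = propose_candidates(task, tools)   # reasoning phase
    return meta_reason(task, candidates)           # meta-reasoning phase

tools = [
    {"name": "add", "description": "add two numbers"},
    {"name": "divide", "description": "divide one number by another"},
    {"name": "lookup", "description": "look up a fact"},
]
print(select_tool("divide 10 by 2", tools)["name"])  # → divide
```

The point of the two-phase split is that the final choice is made over an explicit candidate set rather than by greedy decoding over all tools at once.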
Anthology ID:
2025.findings-naacl.440
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7885–7897
URL:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.440/
Cite (ACL):
Lisa Alazraki and Marek Rei. 2025. Meta-Reasoning Improves Tool Use in Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 7885–7897, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Meta-Reasoning Improves Tool Use in Large Language Models (Alazraki & Rei, Findings 2025)
PDF:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.440.pdf