Self-Training Large Language Models for Tool-Use Without Demonstrations
Ne Luo, Aryo Pradipta Gema, Xuanli He, Emile Van Krieken, Pietro Lesci, Pasquale Minervini
Abstract
Large language models (LLMs) remain prone to factual inaccuracies and computational errors, including hallucinations and mistakes in mathematical reasoning. Recent work has augmented LLMs with tools to mitigate these shortcomings, but such approaches often require curated gold tool-use demonstrations. In this paper, we investigate whether LLMs can learn to use tools without demonstrations. First, we analyse zero-shot prompting strategies to guide LLMs in tool utilisation. Second, we propose a self-training method to synthesise tool-use traces using the LLM itself. We compare supervised fine-tuning and preference fine-tuning for training the model on datasets constructed from existing Question Answering (QA) datasets, i.e., TriviaQA and GSM8K. Experiments show that tool-use improves performance on a long-tail knowledge task, yielding a 3.7% gain on PopQA, which is used solely for evaluation, but produces mixed results on other datasets, i.e., TriviaQA, GSM8K, and NQ-Open. Our findings highlight the potential and challenges of integrating external tools into LLMs without demonstrations.
- Anthology ID: 2025.findings-naacl.69
- Volume: Findings of the Association for Computational Linguistics: NAACL 2025
- Month: April
- Year: 2025
- Address: Albuquerque, New Mexico
- Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 1253–1271
- URL: https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.69/
- Cite (ACL): Ne Luo, Aryo Pradipta Gema, Xuanli He, Emile Van Krieken, Pietro Lesci, and Pasquale Minervini. 2025. Self-Training Large Language Models for Tool-Use Without Demonstrations. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 1253–1271, Albuquerque, New Mexico. Association for Computational Linguistics.
- Cite (Informal): Self-Training Large Language Models for Tool-Use Without Demonstrations (Luo et al., Findings 2025)
- PDF: https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.69.pdf