A Joint Optimization Framework for Enhancing Efficiency of Tool Utilization in LLM Agents

Bin Wu, Edgar Meij, Emine Yilmaz


Abstract
Large Language Models (LLMs) augmented with external tools have demonstrated remarkable capabilities in complex problem solving. Existing approaches to tool utilization typically involve an LLM agent whose prompt instructs it to use the descriptions of the available tools to determine and call the tools required to solve the problem. Inference-scaling techniques, such as chain-of-thought and tree-of-thought reasoning, are commonly used, but they incur significant computational overhead, rendering such methods impractical in real-world applications. In this work, we recognize and formalize the critical role of the instructions provided in agent prompts and tool descriptions, collectively referred to as *context*, and show that incomplete *context* is one of the causes of this computational overhead. To close this efficiency gap, we propose an optimization framework that jointly refines both the instructions in the agent prompt and the tool descriptions, enhancing their interaction. Experiments on StableToolBench and RestBench demonstrate that our optimized agents achieve superior efficiency while maintaining effectiveness. Our findings underscore the critical role of context optimization in improving LLM agents for tool utilization, paving the way for more responsive and cost-effective LLM agents. Our code is available at [https://github.com/Bingo-W/ToolOptimization](https://github.com/Bingo-W/ToolOptimization).
Anthology ID: 2025.findings-acl.1149
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues: Findings | WS
Publisher: Association for Computational Linguistics
Pages: 22361–22373
URL: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.findings-acl.1149/
Cite (ACL): Bin Wu, Edgar Meij, and Emine Yilmaz. 2025. A Joint Optimization Framework for Enhancing Efficiency of Tool Utilization in LLM Agents. In Findings of the Association for Computational Linguistics: ACL 2025, pages 22361–22373, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): A Joint Optimization Framework for Enhancing Efficiency of Tool Utilization in LLM Agents (Wu et al., Findings 2025)
PDF: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.findings-acl.1149.pdf