From Awareness to Adaptability: Enhancing Tool Utilization for Scientific Reasoning
Wenjing Xie | Xiaobo Liang | Juntao Li | Wanfu Wang | Kehai Chen | Qiaoming Zhu | Min Zhang
Findings of the Association for Computational Linguistics: ACL 2025
As large language models (LLMs) are increasingly applied to complex scientific problem-solving, their effectiveness is often limited by unconscious or failed tool usage. To address this issue, we introduce the Tool-Awareness Training (TAT) method, designed to enhance scientific reasoning. This approach leverages both forward and backward data generation strategies to strengthen the model's conscious and selective tool utilization in multi-step reasoning tasks. Our method unfolds in three stages: (1) developing tool knowledge through backward tool-use data generation, (2) enhancing tool awareness in multi-step reasoning with forward reasoning data, and (3) improving domain adaptability through multi-task learning on large-scale domain-specific data. These stages progressively build the foundations of tool learning and scientific reasoning and integrate the two, enabling the model to tackle multi-domain scientific tasks while optimizing tool usage. Our experimental results demonstrate that TAT significantly enhances LLM performance on mathematical and scientific reasoning tasks, particularly by improving the model's tool utilization capabilities, including proactivity and execution success rates.
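The abstract gives only the three-stage outline, so the following is a minimal, hypothetical sketch of how such a curriculum might be wired together. None of these names (`Example`, `build_backward_data`, `fine_tune`, `tat_train`) come from the paper, and the fine-tuning step is a stand-in for whatever supervised training procedure the authors actually use.

```python
# Illustrative sketch of a three-stage tool-awareness curriculum as described
# in the abstract. All identifiers and data shapes are assumptions, not the
# paper's actual code or API.
from dataclasses import dataclass


@dataclass
class Example:
    prompt: str
    target: str  # supervised target: tool call(s) and/or final answer


def build_backward_data(tool_logs):
    """Stage 1 (tool knowledge): invert logged tool executions into
    (question -> tool call) pairs so the model learns what each tool does."""
    return [Example(log["question"], log["tool_call"]) for log in tool_logs]


def build_forward_data(problems, rollout):
    """Stage 2 (tool awareness): sample multi-step reasoning traces in which
    the model decides when to invoke a tool; keep only verified-correct ones."""
    kept = []
    for p in problems:
        trace = rollout(p)  # interleaved reasoning steps + tool calls + answer
        if trace["correct"]:
            kept.append(Example(p, trace["trajectory"]))
    return kept


def fine_tune(model, dataset, stage_name):
    """Stand-in for one supervised fine-tuning pass over a stage's data."""
    print(f"[{stage_name}] fine-tuning on {len(dataset)} examples")
    return model  # a real implementation would update the weights here


def tat_train(model, tool_logs, problems, rollout, domain_mix):
    """Run the three stages sequentially: backward tool-use data, forward
    reasoning data, then multi-task learning on mixed domain-specific data."""
    model = fine_tune(model, build_backward_data(tool_logs), "stage1-backward")
    model = fine_tune(model, build_forward_data(problems, rollout), "stage2-forward")
    model = fine_tune(model, domain_mix, "stage3-multitask")
    return model


if __name__ == "__main__":
    # Toy end-to-end run with dummy data and a trivially "correct" rollout.
    logs = [{"question": "Integrate x^2.", "tool_call": "sympy.integrate('x**2')"}]
    probs = ["What is 12! / 10!?"]
    rollout = lambda p: {"correct": True, "trajectory": f"<tool>calc</tool> answer for {p}"}
    tat_train("toy-llm", logs, probs, rollout, domain_mix=[])
```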