Direct Behavior Optimization: Unlocking the Potential of Lightweight LLMs

Hongming Yang, Shi Lin, Jun Shao, Changting Lin, Donghai Zhu, Meng Han, Qinglei Kong


Abstract
Lightweight Large Language Models (LwLLMs) are reduced-parameter, optimized models designed to run efficiently on consumer-grade hardware, offering significant advantages in resource efficiency, cost-effectiveness, and data privacy. However, these models often struggle with limited inference and reasoning capabilities, which restrict their performance on complex tasks and limit their practical applicability. Moreover, existing prompt optimization methods typically rely on extensive manual effort or the meta-cognitive abilities of state-of-the-art LLMs, making them less effective for LwLLMs. To address these challenges, we introduce DeBoP, a new Direct Behavior Optimization Paradigm that originates from the Chain-of-Thought (CoT) prompting technique. Unlike CoT prompting, DeBoP is an automatic optimization method that operates directly on the behavior of LwLLMs. In particular, DeBoP transforms the optimization of complex prompts into the optimization of discrete, quantifiable execution sequences using a gradient-free Monte Carlo Tree Search. We evaluate DeBoP on seven challenging tasks where state-of-the-art LLMs excel but LwLLMs generally underperform. Experimental results demonstrate that DeBoP significantly outperforms recent prompt optimization methods on most tasks. In particular, DeBoP-optimized LwLLMs surpass GPT-3.5 on most tasks while reducing computational time by approximately 60% compared to other automatic prompt optimization methods.
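
To make the core idea concrete, the Python sketch below illustrates a gradient-free Monte Carlo Tree Search over discrete execution sequences, in the spirit of the behavior optimization the abstract describes. It is not the authors' implementation: the action set, the Node and mcts_search names, and the toy evaluate_sequence scoring function are hypothetical stand-ins for running an LwLLM on a validation set.

# Illustrative sketch only: gradient-free MCTS over discrete "behavior" sequences.
# All names and the toy scoring function are hypothetical, not from the paper.
import math
import random

ACTIONS = ["extract_entities", "filter_evidence", "summarize", "verify", "answer"]
MAX_LEN = 4  # maximum length of an execution sequence


def evaluate_sequence(seq):
    """Stand-in for the task accuracy of an LwLLM following this behavior sequence.

    A real system would run the lightweight model with these steps on a validation
    set; here a toy score rewards one particular ordering, plus evaluation noise.
    """
    target = ["extract_entities", "filter_evidence", "answer"]
    score = sum(1.0 for a, b in zip(seq, target) if a == b) / len(target)
    return score + random.gauss(0, 0.05)


class Node:
    def __init__(self, seq, parent=None):
        self.seq = seq                # partial execution sequence at this node
        self.parent = parent
        self.children = {}            # action -> child Node
        self.visits = 0
        self.value = 0.0              # running mean of rollout rewards

    def ucb_child(self, c=1.4):
        # Pick the child maximizing UCB1 (exploitation + exploration bonus).
        return max(
            self.children.values(),
            key=lambda n: n.value + c * math.sqrt(math.log(self.visits + 1) / (n.visits + 1e-9)),
        )


def mcts_search(iterations=500):
    root = Node(seq=[])
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB.
        while len(node.children) == len(ACTIONS) and len(node.seq) < MAX_LEN:
            node = node.ucb_child()
        # 2. Expansion: try one untried action if the sequence is not complete.
        if len(node.seq) < MAX_LEN:
            untried = [a for a in ACTIONS if a not in node.children]
            action = random.choice(untried)
            child = Node(node.seq + [action], parent=node)
            node.children[action] = child
            node = child
        # 3. Simulation: complete the sequence randomly and score it.
        rollout = node.seq + random.choices(ACTIONS, k=MAX_LEN - len(node.seq))
        reward = evaluate_sequence(rollout)
        # 4. Backpropagation: update running means along the path to the root.
        while node is not None:
            node.visits += 1
            node.value += (reward - node.value) / node.visits
            node = node.parent
    # Read off the most-visited path as the optimized behavior sequence.
    best, node = [], root
    while node.children:
        node = max(node.children.values(), key=lambda n: n.visits)
        best.append(node.seq[-1])
    return best


if __name__ == "__main__":
    print("optimized behavior sequence:", mcts_search())

In a real pipeline, evaluate_sequence would execute the lightweight model under the candidate behavior sequence and report task accuracy; the search only needs such scalar rewards, which is what keeps it gradient-free.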
Anthology ID:
2025.findings-acl.998
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
19489–19515
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.998/
DOI:
10.18653/v1/2025.findings-acl.998
Cite (ACL):
Hongming Yang, Shi Lin, Jun Shao, Changting Lin, Donghai Zhu, Meng Han, and Qinglei Kong. 2025. Direct Behavior Optimization: Unlocking the Potential of Lightweight LLMs. In Findings of the Association for Computational Linguistics: ACL 2025, pages 19489–19515, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Direct Behavior Optimization: Unlocking the Potential of Lightweight LLMs (Yang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.998.pdf