SOPL: A Sequential Optimal Learning Approach to Automated Prompt Engineering in Large Language Models

Shuyang Wang, Somayeh Moazeni, Diego Klabjan


Abstract
Designing effective prompts is essential to guiding large language models (LLMs) toward desired responses. Automated prompt engineering aims to reduce reliance on manual effort by streamlining the design, refinement, and optimization of natural language prompts. This paper proposes an optimal learning framework for automated prompt engineering with black-box models, designed to sequentially identify effective prompt features under limited evaluation budgets. We introduce a feature-based method to express prompt templates, which significantly broadens the search space. Bayesian regression is employed to exploit correlations among similar prompts, accelerating the learning process. To efficiently explore the large space of prompt features, we adopt the forward-looking Knowledge-Gradient (KG) policy for sequential optimal learning; the policy is computed efficiently by solving mixed-integer second-order cone optimization problems, making it scalable and capable of accommodating prompts characterized only through constraints. Our method significantly outperforms a set of benchmark strategies on instruction induction tasks within a limited number of prompt evaluation iterations, demonstrating the potential of optimal learning for efficient prompt learning.
Anthology ID:
2025.findings-emnlp.1155
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
21172–21185
URL:
https://preview.aclanthology.org/ingest-luhme/2025.findings-emnlp.1155/
DOI:
10.18653/v1/2025.findings-emnlp.1155
Cite (ACL):
Shuyang Wang, Somayeh Moazeni, and Diego Klabjan. 2025. SOPL: A Sequential Optimal Learning Approach to Automated Prompt Engineering in Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 21172–21185, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
SOPL: A Sequential Optimal Learning Approach to Automated Prompt Engineering in Large Language Models (Wang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingest-luhme/2025.findings-emnlp.1155.pdf
Checklist:
2025.findings-emnlp.1155.checklist.pdf