P3: Prompts Promote Prompting

Xinyu Zhang, Yuanquan Hu, Fangchao Liu, Zhicheng Dou

Abstract
Current large language model (LLM) applications often employ multi-component prompts, comprising both system and user prompts, to guide model behaviors. While recent advancements have demonstrated the efficacy of automatically optimizing either the system or user prompt to boost performance, such unilateral approaches often yield suboptimal outcomes due to the interdependent nature of these components. In this work, we introduce P3, a novel self-improvement framework that concurrently optimizes both system and user prompts through an iterative process. The offline-optimized prompts are further leveraged to promote online prompting by performing query-dependent prompt optimization. Extensive experiments on general tasks (e.g., Arena-hard and Alpaca-eval) and reasoning tasks (e.g., GSM8K and GPQA) demonstrate that P3 achieves superior performance in the realm of automatic prompt optimization. Our results highlight the effectiveness of a holistic optimization strategy in enhancing LLM performance across diverse domains.
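The abstract describes a two-stage procedure: offline, system and user prompts are co-optimized iteratively; online, the offline-optimized prompts are reused for query-dependent prompting. The Python sketch below illustrates one plausible reading of that loop. It is an illustration only, not the authors' implementation: all helper callables (call_llm, score, rewrite, similarity) are hypothetical placeholders, and the paper's actual optimization and selection procedures may differ.

```python
# A minimal sketch of the P3 loop as read from the abstract. Offline,
# the system prompt and user template are revised alternately, each
# conditioned on the other's current state; online, the incoming query
# selects the best-matching offline-optimized pair.
# All callables here are hypothetical placeholders (assumptions).
from typing import Callable, List, Tuple

def offline_optimize(
    system: str,
    user_tmpl: str,                        # assumed to contain a "{query}" slot
    train_queries: List[str],
    call_llm: Callable[[str, str], str],   # (system, user) -> response
    score: Callable[[str, str], float],    # (query, response) -> quality
    rewrite: Callable[[str, str], str],    # (prompt, fixed counterpart) -> revision
    rounds: int = 3,
) -> List[Tuple[str, str]]:
    """Alternately revise the system prompt and the user template,
    keeping a revision only if the average training score improves."""
    def avg(sys_p: str, usr_t: str) -> float:
        return sum(score(q, call_llm(sys_p, usr_t.format(query=q)))
                   for q in train_queries) / len(train_queries)

    best, pool = avg(system, user_tmpl), []
    for _ in range(rounds):
        # Revise the system prompt with the user template held fixed ...
        cand = rewrite(system, user_tmpl)
        if (s := avg(cand, user_tmpl)) > best:
            system, best = cand, s
        # ... then revise the user template against the updated system prompt.
        cand = rewrite(user_tmpl, system)
        if (s := avg(system, cand)) > best:
            user_tmpl, best = cand, s
        pool.append((system, user_tmpl))
    return pool                            # offline-optimized (system, user) pairs

def online_prompt(
    query: str,
    pool: List[Tuple[str, str]],
    similarity: Callable[[str, Tuple[str, str]], float],
) -> Tuple[str, str]:
    """Query-dependent online prompting: return the offline-optimized
    (system, user) pair that best matches the incoming query."""
    return max(pool, key=lambda pair: similarity(query, pair))
```

Alternating coordinate-style updates is one simple way to respect the interdependence the abstract emphasizes: each component is revised against the other's current state rather than in isolation.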
Anthology ID:
2025.findings-acl.618
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11948–11965
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.618/
DOI:
10.18653/v1/2025.findings-acl.618
Cite (ACL):
Xinyu Zhang, Yuanquan Hu, Fangchao Liu, and Zhicheng Dou. 2025. P3: Prompts Promote Prompting. In Findings of the Association for Computational Linguistics: ACL 2025, pages 11948–11965, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
P3: Prompts Promote Prompting (Zhang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.618.pdf