S2LPP: Small-to-Large Prompt Prediction across LLMs

Liang Cheng, Tianyi Li, Zhaowei Wang, Mark Steedman


Abstract
The performance of pre-trained Large Language Models (LLMs) is often sensitive to nuances in prompt templates, requiring careful prompt engineering that adds costs in computation and human effort. In this study, we present experiments encompassing multiple LLM variants of varying sizes, aimed at probing their preferences among different prompts. Through experiments on Question Answering, we show that prompt preferences are consistent across LLMs of different sizes. We also show that this consistency extends to other tasks, such as Natural Language Inference. Exploiting this consistency, we propose a method that uses a smaller model to select effective prompt templates for a larger model. We show that our method substantially reduces the cost of prompt engineering while consistently matching the performance of the optimal prompt among the candidates. Moreover, our experiments show the efficacy of our strategy across fourteen LLMs and its applicability to a broad range of NLP tasks, highlighting its robustness.
Anthology ID:
2025.findings-emnlp.483
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9100–9115
URL:
https://preview.aclanthology.org/ingest-luhme/2025.findings-emnlp.483/
DOI:
10.18653/v1/2025.findings-emnlp.483
Cite (ACL):
Liang Cheng, Tianyi Li, Zhaowei Wang, and Mark Steedman. 2025. S2LPP: Small-to-Large Prompt Prediction across LLMs. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 9100–9115, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
S2LPP: Small-to-Large Prompt Prediction across LLMs (Cheng et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingest-luhme/2025.findings-emnlp.483.pdf
Checklist:
2025.findings-emnlp.483.checklist.pdf