Large Language Models Might Not Care What You Are Saying: Prompt Format Beats Descriptions

Chenming Tang, Zhixiang Wang, Hao Sun, Yunfang Wu


Abstract
With the help of in-context learning (ICL), large language models (LLMs) have achieved impressive performance across various tasks. However, the function of descriptive instructions during ICL remains under-explored. In this work, we propose an ensemble prompt framework that describes the selection criteria of multiple in-context examples, and preliminary experiments on machine translation (MT) across six translation directions confirm that this framework boosts ICL performance. To our surprise, however, LLMs might not care what the descriptions actually say: the performance gain stems primarily from the ensemble format itself, which yields improvement even when the descriptions are replaced with random nouns. We further apply this new ensemble framework to a range of commonsense, math, logical reasoning, and hallucination tasks with three LLMs and achieve promising results, suggesting again that designing a proper prompt format is far more effective and efficient than investing effort in specific descriptions.
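To make the idea concrete, here is a minimal, hypothetical sketch of an "ensemble prompt" for MT: in-context examples are grouped under short headers naming the criterion by which each group was selected, rather than listed flat. The function name, header wording, and example pairs are illustrative assumptions, not the authors' exact prompt.

```python
def build_ensemble_prompt(groups, query):
    """Build an ensemble-format ICL prompt.

    groups: dict mapping a selection-criterion description (e.g.
            "word-level similarity") to a list of (source, target)
            example pairs; query: the source sentence to translate.
    """
    sections = []
    for description, examples in groups.items():
        # Each group of examples sits under its own descriptive header.
        lines = [f"Examples selected by {description}:"]
        for src, tgt in examples:
            lines.append(f"English: {src}\nGerman: {tgt}")
        sections.append("\n".join(lines))
    body = "\n\n".join(sections)
    return f"{body}\n\nEnglish: {query}\nGerman:"

prompt = build_ensemble_prompt(
    {
        "word-level similarity": [("Hello.", "Hallo.")],
        "syntactic similarity": [("How are you?", "Wie geht es dir?")],
    },
    "Good morning.",
)
print(prompt)
```

Per the paper's finding, swapping the criterion descriptions for random nouns (e.g. "Examples selected by banana:") would reportedly preserve much of the gain, since the grouped format itself appears to do the work.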
Anthology ID:
2025.findings-emnlp.3
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
26–48
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.3/
DOI:
10.18653/v1/2025.findings-emnlp.3
Cite (ACL):
Chenming Tang, Zhixiang Wang, Hao Sun, and Yunfang Wu. 2025. Large Language Models Might Not Care What You Are Saying: Prompt Format Beats Descriptions. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 26–48, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Large Language Models Might Not Care What You Are Saying: Prompt Format Beats Descriptions (Tang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.3.pdf
Checklist:
2025.findings-emnlp.3.checklist.pdf