Where to show Demos in Your Prompt: A Positional Bias of In-Context Learning

Kwesi Adu Cobbina, Tianyi Zhou


Abstract
In-context learning (ICL) is a critical emerging capability of large language models (LLMs), enabling few-shot learning during inference by including a few demonstrations (demos) in the prompt. However, ICL's performance has been found to be sensitive to the choice of demos and their order. This paper investigates a previously unexplored positional bias of ICL: we observe that predictions and accuracy can drift drastically when the positions of the demos, the system prompt, and the user message in the LLM input are varied. We refer to this bias as the DEMOS' POSITION IN PROMPT bias (DPP bias). We design a systematic evaluation pipeline to study this type of positional bias across classification, QA, summarization, and reasoning tasks. We introduce two metrics, ACCURACY-CHANGE and PREDICTION-CHANGE, to quantify the net gains and output volatility induced by changing the demos' position. Extensive experiments on ten LLMs from four open-source model families (QWEN, LLAMA3, MISTRAL, COHERE) verify that the bias significantly affects their accuracy and predictions: placing demos at the start of the prompt yields the most stable and accurate outputs, with gains of up to +6 points. In contrast, placing demos at the end of the user message flips over 30% of predictions without improving correctness in QA tasks. Smaller models are most affected by this sensitivity, though even large models remain marginally affected on complex tasks.
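As a rough illustration only (not the paper's implementation), the two metrics could be computed as follows; the function names `accuracy_change` and `prediction_change` and the list-based inputs are hypothetical conveniences, not names from the paper:

```python
# Sketch of the two metrics described in the abstract, under the
# assumption that both are computed over paired predictions:
# baseline_preds from one demos position, moved_preds from another.

def accuracy(preds, gold):
    """Fraction of predictions matching the gold labels."""
    return sum(p == g for p, g in zip(preds, gold)) / len(gold)

def accuracy_change(baseline_preds, moved_preds, gold):
    """Net accuracy gain (or loss) from moving the demos."""
    return accuracy(moved_preds, gold) - accuracy(baseline_preds, gold)

def prediction_change(baseline_preds, moved_preds):
    """Fraction of examples whose prediction flips when demos move."""
    flips = sum(b != m for b, m in zip(baseline_preds, moved_preds))
    return flips / len(baseline_preds)
```

This distinction matters because accuracy can stay flat while many individual predictions flip; for instance, two flips that cancel out give an accuracy change of zero but a nonzero prediction change.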
Anthology ID:
2025.emnlp-main.1503
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
29548–29581
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.emnlp-main.1503/
DOI:
10.18653/v1/2025.emnlp-main.1503
Cite (ACL):
Kwesi Adu Cobbina and Tianyi Zhou. 2025. Where to show Demos in Your Prompt: A Positional Bias of In-Context Learning. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 29548–29581, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Where to show Demos in Your Prompt: A Positional Bias of In-Context Learning (Cobbina & Zhou, EMNLP 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.emnlp-main.1503.pdf
Checklist:
2025.emnlp-main.1503.checklist.pdf