Exploring LLM Priming Strategies for Few-Shot Stance Classification

Yamen Ajjour, Henning Wachsmuth


Abstract
Large language models (LLMs) are effective at predicting the labels of unseen target instances when the prompt instructs them on the task and provides training instances. LLMs generate text with higher probability if the prompt contains text with similar characteristics, a phenomenon called priming that especially affects argumentation. An open question in NLP is how to systematically exploit priming to choose a set of instances suitable for a given task. For stance classification, LLMs may be primed with few-shot instances before identifying whether a given argument is pro or con a topic. In this paper, we explore two priming strategies for few-shot stance classification: one takes the instances that are most semantically similar to the target, and the other chooses those that are most stance-similar. Experiments on three common stance datasets suggest that priming an LLM with stance-similar instances is particularly effective for few-shot stance classification compared to baseline strategies, and that this behavior is largely consistent across different LLM variants.
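The abstract's first strategy, picking the few-shot instances most semantically similar to the target argument, can be illustrated with a minimal sketch. The similarity measure and prompt template below are assumptions for illustration only (a simple bag-of-words cosine stands in for whatever embedding model the paper actually uses), not the authors' implementation:

```python
from collections import Counter
import math


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def select_few_shots(target: str, pool: list[tuple[str, str]], k: int = 2):
    """Pick the k (argument, stance) training instances most similar to the target."""
    tvec = Counter(target.lower().split())
    return sorted(
        pool,
        key=lambda ex: cosine(tvec, Counter(ex[0].lower().split())),
        reverse=True,
    )[:k]


def build_prompt(target: str, topic: str, shots: list[tuple[str, str]]) -> str:
    """Assemble a few-shot stance-classification prompt from the selected instances."""
    lines = [f"Classify the stance of each argument on the topic '{topic}' as pro or con."]
    for text, stance in shots:
        lines.append(f"Argument: {text}\nStance: {stance}")
    lines.append(f"Argument: {target}\nStance:")
    return "\n\n".join(lines)
```

The stance-similar strategy would differ only in the selection step, ranking candidates by the similarity of their stance rather than their wording; the prompt assembly stays the same.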
Anthology ID:
2025.argmining-1.2
Volume:
Proceedings of the 12th Argument Mining Workshop
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Elena Chistova, Philipp Cimiano, Shohreh Haddadan, Gabriella Lapesa, Ramon Ruiz-Dolz
Venues:
ArgMining | WS
Publisher:
Association for Computational Linguistics
Pages:
11–23
URL:
https://preview.aclanthology.org/landing_page/2025.argmining-1.2/
DOI:
10.18653/v1/2025.argmining-1.2
Cite (ACL):
Yamen Ajjour and Henning Wachsmuth. 2025. Exploring LLM Priming Strategies for Few-Shot Stance Classification. In Proceedings of the 12th Argument Mining Workshop, pages 11–23, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Exploring LLM Priming Strategies for Few-Shot Stance Classification (Ajjour & Wachsmuth, ArgMining 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.argmining-1.2.pdf