Let The Jury Decide: Fair Demonstration Selection for In-Context Learning through Incremental Greedy Evaluation

Sadaf Md Halim, Chen Zhao, Xintao Wu, Latifur Khan, Christan Grant, Fariha Ishrat Rahman, Feng Chen


Abstract
Large Language Models (LLMs) are powerful in-context learners, achieving strong performance with just a few high-quality demonstrations. However, fairness concerns arise in many in-context classification tasks, especially when predictions involve sensitive attributes. To address this, we propose JUDGE—a simple yet effective framework for selecting fair and representative demonstrations that improve group fairness in In-Context Learning. JUDGE constructs the demonstration set iteratively using a greedy approach, guided by a small, carefully selected jury set. Our method remains robust across varying LLM architectures and datasets, ensuring consistent fairness improvements. We evaluate JUDGE on four datasets using four LLMs, comparing it against seven baselines. Results show that JUDGE consistently improves fairness metrics without compromising accuracy.
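The abstract describes an iterative greedy procedure in which each candidate demonstration is scored against a small jury set for both accuracy and group fairness. The sketch below is a minimal, hedged illustration of that idea only, not the authors' implementation: the function names (`greedy_select`, `predict_with_demos`), the demographic-parity penalty, and the trade-off weight `lam` are assumptions chosen for concreteness.

```python
# Illustrative sketch of jury-guided greedy demonstration selection.
# `predict_with_demos` is a user-supplied callable that prompts an LLM with the
# chosen demonstrations and returns predictions for the jury examples.
from typing import Callable, Dict, List, Sequence


def dp_difference(preds: Sequence[int], groups: Sequence[int]) -> float:
    """Demographic parity gap: |P(yhat=1 | g=0) - P(yhat=1 | g=1)|."""
    def positive_rate(g: int) -> float:
        members = [p for p, gr in zip(preds, groups) if gr == g]
        return sum(members) / max(1, len(members))
    return abs(positive_rate(0) - positive_rate(1))


def greedy_select(
    candidates: List[Dict],   # candidate demonstrations with text, label, group
    jury: List[Dict],         # small labeled jury set with sensitive groups
    predict_with_demos: Callable[[List[Dict], List[Dict]], List[int]],
    k: int = 8,               # number of demonstrations to select
    lam: float = 1.0,         # assumed weight on the fairness penalty
) -> List[Dict]:
    """Greedily add the candidate that best improves the jury score."""
    selected: List[Dict] = []
    pool = list(candidates)
    labels = [ex["label"] for ex in jury]
    groups = [ex["group"] for ex in jury]
    while len(selected) < k and pool:
        best, best_score = None, float("-inf")
        for cand in pool:
            preds = predict_with_demos(selected + [cand], jury)
            acc = sum(p == y for p, y in zip(preds, labels)) / len(jury)
            score = acc - lam * dp_difference(preds, groups)
            if score > best_score:
                best, best_score = cand, score
        selected.append(best)
        pool.remove(best)
    return selected
```

Under these assumptions, the jury set acts as a lightweight validation signal: a candidate is kept only if it raises jury accuracy without widening the group disparity more than the penalty allows.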
Anthology ID: 2025.findings-acl.968
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 18914–18931
URL: https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.968/
DOI: 10.18653/v1/2025.findings-acl.968
Cite (ACL): Sadaf Md Halim, Chen Zhao, Xintao Wu, Latifur Khan, Christan Grant, Fariha Ishrat Rahman, and Feng Chen. 2025. Let The Jury Decide: Fair Demonstration Selection for In-Context Learning through Incremental Greedy Evaluation. In Findings of the Association for Computational Linguistics: ACL 2025, pages 18914–18931, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Let The Jury Decide: Fair Demonstration Selection for In-Context Learning through Incremental Greedy Evaluation (Halim et al., Findings 2025)
PDF: https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.968.pdf