Add-One-In: Incremental Sample Selection for Large Language Models via a Choice-Based Greedy Paradigm

Zhuo Li, Yuhao Du, Xiaoqi Jiao, Steven Y. Guo, Yuege Feng, Xiang Wan, Anningzhe Gao, Jinpeng Hu


Abstract
Selecting high-quality and diverse training samples from extensive datasets plays a crucial role in reducing training overhead and enhancing the performance of Large Language Models (LLMs). However, existing studies fall short in assessing the overall value of the selected subset, focusing primarily on individual sample quality, and struggle to balance ensuring diversity against minimizing the number of data points that must be traversed. Therefore, this paper introduces a novel choice-based sample selection framework that shifts the focus from evaluating individual sample quality to comparing the contribution that different candidate samples make when added to the subset. Leveraging the advanced language understanding capabilities of LLMs, we use an LLM to assess the value of each option during the selection process. Furthermore, we design a greedy sampling process in which samples are added to the subset incrementally, improving efficiency by eliminating the need to exhaustively traverse the entire dataset under a limited selection budget. Extensive experiments demonstrate that data selected by our method not only surpasses the performance of the full dataset but also achieves results competitive with recent strong approaches while requiring fewer selections. Moreover, we validate our approach on a larger medical dataset, highlighting its applicability to real-world scenarios.
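
The abstract describes a choice-based greedy loop: candidates are drawn in small batches and an LLM is asked which one contributes the most value when added to the current subset, so the full dataset is never exhaustively scored. The sketch below is a minimal illustration of that loop under stated assumptions, not the paper's implementation: the names (`add_one_in_selection`, `choose_fn`, `num_options`) and the placeholder random chooser are hypothetical, and in the actual method the chooser would be an LLM prompted with the current subset and the candidate options.

```python
import random
from typing import Callable, List, Sequence


def add_one_in_selection(
    pool: Sequence[str],
    budget: int,
    choose_fn: Callable[[List[str], List[str]], int],
    num_options: int = 4,
    seed: int = 0,
) -> List[str]:
    """Greedy "add-one-in" selection sketch.

    At each step, draw a small set of candidate options from the remaining
    pool and ask choose_fn (e.g. an LLM comparing each option's marginal
    contribution to the current subset) which option to add. Only the chosen
    sample is removed from the pool, so selection stops after `budget` steps
    without traversing the whole dataset.
    """
    rng = random.Random(seed)
    remaining = list(pool)
    subset: List[str] = []
    while remaining and len(subset) < budget:
        k = min(num_options, len(remaining))
        options = rng.sample(remaining, k)
        # The chooser returns the index of the option judged most valuable
        # as an addition to the current subset.
        best = options[choose_fn(subset, options)]
        subset.append(best)
        remaining.remove(best)
    return subset


def random_chooser(subset: List[str], options: List[str]) -> int:
    # Placeholder chooser: a real system would prompt an LLM to compare the
    # options; here we pick at random so the sketch runs end to end.
    return random.randrange(len(options))


if __name__ == "__main__":
    data = [f"sample_{i}" for i in range(100)]
    selected = add_one_in_selection(data, budget=10, choose_fn=random_chooser)
    print(selected)
```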
Anthology ID:
2025.emnlp-main.270
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5321–5340
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.270/
Cite (ACL):
Zhuo Li, Yuhao Du, Xiaoqi Jiao, Steven Y. Guo, Yuege Feng, Xiang Wan, Anningzhe Gao, and Jinpeng Hu. 2025. Add-One-In: Incremental Sample Selection for Large Language Models via a Choice-Based Greedy Paradigm. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 5321–5340, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Add-One-In: Incremental Sample Selection for Large Language Models via a Choice-Based Greedy Paradigm (Li et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.270.pdf
Checklist:
 2025.emnlp-main.270.checklist.pdf