CrowdSelect: Synthetic Instruction Data Selection with Multi-LLM Wisdom

Yisen Li, Lingfeng Yang, Wenxuan Shen, Pan Zhou, Yao Wan, Weiwei Lin, Dongping Chen


Abstract
Distilling the instruction-following capabilities of advanced Large Language Models into smaller models using a selected data subset has become a mainstream approach in model training. While existing synthetic instruction data selection strategies rely mainly on single-dimensional signals (e.g., reward scores, model perplexity), they fail to capture the complexity of instruction-following across diverse fields. We therefore investigate more diverse signals that capture comprehensive characteristics of instruction-response pairs and propose three foundational metrics that leverage multi-LLM wisdom, informed by (1) diverse LLM responses and (2) reward model assessment. Building upon these foundational metrics, we propose CrowdSelect, an integrated metric that incorporates a clustering-based approach to maintain response diversity. Our comprehensive experiments demonstrate that our foundational metrics consistently improve performance across 4 base models on MT-bench and Arena-Hard. CrowdSelect, which efficiently incorporates all metrics, achieves state-of-the-art performance in both Full and LoRA fine-tuning, showing improvements of 4.81% on Arena-Hard and 11.1% on MT-bench with Llama-3.2-3b-instruct. We hope our findings will offer valuable insights for future research in this direction.
Anthology ID:
2026.findings-eacl.79
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1542–1569
URL:
https://preview.aclanthology.org/manual-author-scripts/2026.findings-eacl.79/
Cite (ACL):
Yisen Li, Lingfeng Yang, Wenxuan Shen, Pan Zhou, Yao Wan, Weiwei Lin, and Dongping Chen. 2026. CrowdSelect: Synthetic Instruction Data Selection with Multi-LLM Wisdom. In Findings of the Association for Computational Linguistics: EACL 2026, pages 1542–1569, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
CrowdSelect: Synthetic Instruction Data Selection with Multi-LLM Wisdom (Li et al., Findings 2026)
PDF:
https://preview.aclanthology.org/manual-author-scripts/2026.findings-eacl.79.pdf
Checklist:
2026.findings-eacl.79.checklist.pdf