Improving Influence-based Instruction Tuning Data Selection for Balanced Learning of Diverse Capabilities

Qirun Dai, Dylan Zhang, Jiaqi W. Ma, Hao Peng


Abstract
Selecting appropriate training data is crucial for instruction fine-tuning of large language models (LLMs), which aims to (1) elicit strong capabilities, and (2) achieve balanced performance across different tasks. Influence-based methods show promise in achieving (1) by estimating the contribution of each training example to the model’s predictions, but often struggle with (2). Our systematic investigation reveals that this underperformance can be attributed to an inherent bias, where some tasks intrinsically have greater influence than others. As a result, data selection is often biased towards these tasks, not only hurting the model’s performance on others but also, counterintuitively, harming performance on these high-influence tasks themselves. To address this, we propose BIDS, a Balanced and Influential Data Selection algorithm. BIDS first normalizes the influence scores of the training data, and then iteratively chooses the training example with the highest influence on the most underrepresented task. Experiments with both Llama-3 and Mistral-v0.3 on seven benchmarks spanning five diverse capabilities show that BIDS consistently outperforms both state-of-the-art influence-based algorithms and other non-influence-based frameworks. Surprisingly, training on a 15% subset selected by BIDS can even outperform full-dataset training, while achieving much more balanced performance. Our analysis highlights the importance of both instance-level normalization and iterative optimization of the selected data for balanced learning of diverse capabilities.
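The selection procedure the abstract describes lends itself to a compact greedy loop. Below is a minimal, illustrative sketch in Python, assuming a precomputed matrix of influence scores (training examples × validation tasks); the function name, the L2 instance-level normalization, and the "least accumulated influence" criterion used to identify the most underrepresented task are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def bids_select(influence, k):
    """Greedy, balance-aware data selection over precomputed influence scores.

    influence: (n_train, n_tasks) array; influence[i, t] is the estimated
               influence of training example i on validation task t.
    k:         number of training examples to select (at most n_train).

    Sketch of the two steps the abstract describes:
      (1) instance-level normalization of each example's influence vector,
      (2) iteratively adding the example most influential on the task with
          the least accumulated influence so far ("most underrepresented").
    The normalization and the underrepresentation criterion here are
    illustrative assumptions, not necessarily the paper's exact formulation.
    """
    n_train, n_tasks = influence.shape
    k = min(k, n_train)

    # (1) Normalize each row so no task dominates selection purely through
    # larger raw influence magnitudes.
    norms = np.linalg.norm(influence, axis=1, keepdims=True)
    scores = influence / np.maximum(norms, 1e-12)

    selected = []
    accumulated = np.zeros(n_tasks)            # influence gathered per task
    available = np.ones(n_train, dtype=bool)   # examples not yet chosen

    # (2) Greedy loop: target the most underrepresented task, then take the
    # remaining example with the highest normalized influence on it.
    for _ in range(k):
        target = int(np.argmin(accumulated))
        candidates = np.flatnonzero(available)
        best = candidates[np.argmax(scores[candidates, target])]
        selected.append(int(best))
        available[best] = False
        accumulated += scores[best]

    return selected
```

As a toy usage example, `bids_select(np.random.rand(10000, 5), k=1500)` would pick a 15%-sized subset over five tasks, mirroring the subset ratio reported in the abstract.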
Anthology ID:
2025.findings-emnlp.373
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7079–7102
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.373/
DOI:
10.18653/v1/2025.findings-emnlp.373
Cite (ACL):
Qirun Dai, Dylan Zhang, Jiaqi W. Ma, and Hao Peng. 2025. Improving Influence-based Instruction Tuning Data Selection for Balanced Learning of Diverse Capabilities. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 7079–7102, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Improving Influence-based Instruction Tuning Data Selection for Balanced Learning of Diverse Capabilities (Dai et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.373.pdf
Checklist:
2025.findings-emnlp.373.checklist.pdf