Beyond Similarity: A Gradient-based Graph Method for Instruction Tuning Data Selection
Yang Zhao | Li Du | Xiao Ding | Yangou Ouyang | Hepeng Wang | Kai Xiong | Jinglong Gao | Zhouhao Sun | Dongliang Xu | Qing Yang | Dongchen Li | Bing Qin | Ting Liu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025
Large language models (LLMs) have shown great potential across various industries due to their remarkable ability to generalize through instruction tuning. However, the limited availability of domain-specific data significantly hampers their performance on specialized tasks. While existing methods primarily focus on selecting training data from general datasets that are similar to the target domain, they often fail to consider the joint distribution of instructions, resulting in inefficient learning and suboptimal knowledge transfer. To address these challenges, we introduce **G2IS** (**G**radient-based **G**raph **I**nstruction **S**election), a novel method that constructs a mixed gradient-based instruction graph to capture the joint distribution and interdependencies among instructions. By accounting for the relationships between instructions, G2IS improves domain adaptation efficiency. Additionally, we propose a gradient walk algorithm to refine the data selection process, enhancing both training effectiveness and efficiency. Our experiments demonstrate that G2IS outperforms traditional methods across various domain adaptation tasks, yielding significant performance gains, particularly in complex, data-scarce scenarios. These results underscore the potential of G2IS in advancing the development of large, domain-specific models.
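The abstract names two components, a gradient-based instruction graph and a gradient walk for selection, without specifying how either is built. The sketch below is one plausible instantiation under stated assumptions, not the paper's implementation: it assumes per-example gradient features are already available (e.g., low-rank projections of per-example loss gradients), weights graph edges by gradient cosine similarity, and runs a greedy walk that trades off alignment with a target-domain gradient against redundancy with already-selected examples. The function names, the `redundancy_weight` parameter, and the scoring rule are all illustrative assumptions.

```python
# A minimal, hypothetical sketch of gradient-based graph selection.
# Assumptions (not taken from the paper): gradient features are precomputed,
# edges are gradient cosine similarities, and the "walk" greedily picks the
# node most aligned with the target-domain gradient while penalizing
# similarity to examples already chosen.
import numpy as np


def cosine_matrix(G: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarities between rows of G (n_examples x dim)."""
    norms = np.linalg.norm(G, axis=1, keepdims=True) + 1e-12
    Gn = G / norms
    return Gn @ Gn.T


def gradient_walk_select(
    grads: np.ndarray,        # per-example gradient features, shape (n, d)
    target_grad: np.ndarray,  # mean gradient of a small target-domain set, (d,)
    budget: int,
    redundancy_weight: float = 0.5,
) -> list[int]:
    """Greedy walk over a gradient-similarity graph.

    Each step scores every unselected example by its alignment with the
    target gradient minus a penalty for similarity to the examples already
    selected, then moves to the best-scoring node.
    """
    sim = cosine_matrix(grads)
    tnorm = target_grad / (np.linalg.norm(target_grad) + 1e-12)
    gnorm = grads / (np.linalg.norm(grads, axis=1, keepdims=True) + 1e-12)
    align = gnorm @ tnorm  # cosine alignment of each example with the target

    selected: list[int] = []
    remaining = set(range(len(grads)))
    for _ in range(min(budget, len(grads))):
        best, best_score = None, -np.inf
        for i in remaining:
            penalty = max(sim[i, j] for j in selected) if selected else 0.0
            score = align[i] - redundancy_weight * penalty
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected


# Toy usage: 200 synthetic examples with 16-dim gradient features.
rng = np.random.default_rng(0)
grads = rng.normal(size=(200, 16))
target = rng.normal(size=16)
print("selected indices:", gradient_walk_select(grads, target, budget=10))
```

The redundancy penalty is what distinguishes a joint-distribution view from plain similarity ranking: two examples that are individually well aligned with the target but nearly identical to each other will not both be selected, which is the kind of interdependency the abstract says similarity-only methods miss.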