Hepeng Wang
2025
Beyond Similarity: A Gradient-based Graph Method for Instruction Tuning Data Selection
Yang Zhao | Li Du | Xiao Ding | Yangou Ouyang | Hepeng Wang | Kai Xiong | Jinglong Gao | Zhouhao Sun | Dongliang Xu | Qing Yang | Dongchen Li | Bing Qin | Ting Liu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) have shown great potential across various industries due to their remarkable ability to generalize through instruction tuning. However, the limited availability of domain-specific data significantly hampers their performance on specialized tasks. While existing methods primarily focus on selecting training data from general datasets that are similar to the target domain, they often fail to consider the joint distribution of instructions, resulting in inefficient learning and suboptimal knowledge transfer. To address these challenges, we introduce **G2IS** (**G**radient-based **G**raph **I**nstruction **S**election), a novel method that constructs a mixed gradient-based instruction graph to capture the joint distribution and interdependencies among instructions. By accounting for the relationships between instructions, G2IS improves domain adaptation efficiency. Additionally, we propose a gradient walk algorithm to refine the data selection process, enhancing both training effectiveness and efficiency. Our experiments demonstrate that G2IS outperforms traditional methods across various domain adaptation tasks, yielding significant performance gains, particularly in complex, data-scarce scenarios. These results underscore the potential of G2IS in advancing the development of large, domain-specific models.
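The abstract describes G2IS only at a high level: build a gradient-based graph over candidate instructions, then run a gradient walk to pick a training subset. The sketch below is a minimal illustration of that idea, not the authors' implementation; it assumes per-example gradient features and a target-domain gradient have already been extracted, and the function names, the k-nearest-neighbor graph construction, and the greedy walk criterion are hypothetical stand-ins.

```python
# Minimal sketch (assumptions, not the paper's method): build a k-NN graph over
# per-example gradient features, then greedily walk toward instructions whose
# gradients align with a target-domain gradient.
import numpy as np

def build_instruction_graph(grad_feats: np.ndarray, k: int = 10) -> dict:
    """Connect each instruction to its k most gradient-similar neighbors
    (cosine similarity if rows are unit-normalized)."""
    sims = grad_feats @ grad_feats.T
    graph = {}
    for i in range(len(grad_feats)):
        neighbors = np.argsort(-sims[i])[1:k + 1]  # skip position 0 (the node itself)
        graph[i] = neighbors.tolist()
    return graph

def gradient_walk(graph: dict, grad_feats: np.ndarray,
                  target_grad: np.ndarray, budget: int = 100) -> list:
    """Greedy walk: start at the node best aligned with the target-domain
    gradient and repeatedly move to the most-aligned unvisited neighbor."""
    current = int(np.argmax(grad_feats @ target_grad))
    selected, visited = [current], {current}
    while len(selected) < budget:
        candidates = [n for n in graph[current] if n not in visited]
        if not candidates:
            break
        current = max(candidates, key=lambda n: float(grad_feats[n] @ target_grad))
        selected.append(current)
        visited.add(current)
    return selected
```

The returned indices would then mark the subset of the general instruction pool used for domain-specific tuning; the actual scoring and walk rules in G2IS may differ.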
2024
Self-Evolving GPT: A Lifelong Autonomous Experiential Learner
Jinglong Gao | Xiao Ding | Yiming Cui | Jianbai Zhao | Hepeng Wang | Ting Liu | Bing Qin
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
To improve the performance of large language models (LLMs), researchers have explored providing LLMs with textual task-solving experience via prompts. However, they rely on manual efforts to acquire and apply such experience for each task, which is not feasible for the growing demand for LLMs and the variety of user questions. To address this issue, we design a lifelong autonomous experiential learning framework based on LLMs to explore whether LLMs can imitate the human ability to learn and utilize experience. It autonomously learns and accumulates experience through experience transfer and induction, categorizing the types of input questions to select which accumulated experience to employ for them. Experimental results on six widely used NLP datasets show that our framework performs reliably in each intermediate step and effectively improves the performance of GPT-3.5 and GPT-4. This validates the feasibility of using LLMs to mimic human experiential learning and application capabilities, offering a new path worth further exploration for the evolution of machine intelligence. Additionally, we provide a detailed analysis of the behavior of our framework at each step. We will open-source our code after acceptance, fostering open research in the NLP community and beyond.
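The abstract outlines an accumulate-categorize-retrieve loop without implementation detail. As a rough illustration only, the sketch below shows one way such an experience memory could be wired up; the `categorize`, `induce`, and `llm` callables are hypothetical placeholders, and the storage and retrieval policy is an assumption rather than the paper's framework.

```python
# Illustrative sketch (assumptions, not the authors' framework): a category-keyed
# store of textual experience that is retrieved at answer time and grown after
# each solved question.
from collections import defaultdict

class ExperienceStore:
    """Memory of textual experience, keyed by question category."""
    def __init__(self):
        self.memory = defaultdict(list)

    def add(self, category: str, experience: str) -> None:
        self.memory[category].append(experience)

    def retrieve(self, category: str, top_k: int = 3) -> list:
        return self.memory[category][-top_k:]  # most recently induced lessons

def answer_with_experience(question, categorize, induce, llm, store):
    """Categorize the question, prepend stored experience to the prompt,
    answer it, then induce and store a new reusable lesson."""
    category = categorize(question)                 # hypothetical question-type classifier
    prior = "\n".join(store.retrieve(category))
    answer = llm(f"Experience:\n{prior}\n\nQuestion: {question}")
    store.add(category, induce(question, answer))   # hypothetical induction step
    return answer
```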
Co-authors
- Xiao Ding 2
- Jinglong Gao 2
- Ting Liu 2
- Bing Qin (秦兵) 2
- Yiming Cui 1
Venues
- ACL 2