Xiaobo Xia
2025
MMEvol: Empowering Multimodal Large Language Models with Evol-Instruct
Run Luo | Haonan Zhang | Longze Chen | Ting-En Lin | Xiong Liu | Yuchuan Wu | Min Yang | Yongbin Li | Minzheng Wang | Pengpeng Zeng | Lianli Gao | Heng Tao Shen | Yunshui Li | Hamid Alinejad-Rokny | Xiaobo Xia | Jingkuan Song | Fei Huang
Findings of the Association for Computational Linguistics: ACL 2025
The development of Multimodal Large Language Models (MLLMs) has seen significant progress, driven by increasing demands across various fields (e.g., multimodal agents, embodied intelligence). While model-driven approaches aim to enhance MLLM capabilities through diverse architectures, their performance gains have become increasingly marginal. In contrast, data-driven methods, which scale up image-text instruction datasets, have proven more effective but face challenges related to limited data diversity and complexity. The absence of high-quality instruction data remains a major bottleneck in MLLM development. To address this issue, we propose MMEvol, a novel multimodal instruction data evolution framework. This framework iteratively enhances data quality through a refined combination of fine-grained perception, cognitive reasoning, and interaction evolution, generating a more complex and diverse image-text instruction dataset that significantly improves MLLM capabilities. Starting with an initial dataset, SEED-163K, we employ MMEvol to systematically expand instruction diversity, extend visual reasoning steps to improve cognitive abilities, and extract fine-grained visual details to enhance understanding and robustness. To rigorously evaluate our approach, we conduct extensive qualitative analysis and quantitative experiments across 13 vision-language tasks. Compared to baseline models trained on the original seed dataset, our method achieves an average accuracy improvement of 3.1 percentage points. Moreover, our approach attains state-of-the-art (SOTA) performance in nine tasks while using significantly less data than existing SOTA models.
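A minimal Python sketch of the kind of iterative evolution loop the abstract describes is shown below. The three rewrite directions mirror the fine-grained perception, cognitive reasoning, and interaction evolution axes; the prompt wording, the function names (`generate`, `judge`, `evolve_round`), and the fallback-on-failure rule are assumptions for illustration, not the authors' released pipeline.

```python
# Hypothetical sketch of an Evol-Instruct-style loop for multimodal instruction data,
# loosely following the three evolution directions named in the abstract.
# All prompts and function names are illustrative assumptions.

import random

EVOLUTION_PROMPTS = {
    "fine_grained_perception": (
        "Rewrite the instruction so it requires attending to fine-grained visual "
        "details (objects, attributes, spatial relations) in the image."
    ),
    "cognitive_reasoning": (
        "Rewrite the instruction so answering it requires additional intermediate "
        "visual reasoning steps."
    ),
    "interaction_evolution": (
        "Rewrite the instruction into a different, more diverse task form "
        "(e.g., comparison, counting, multi-turn question)."
    ),
}


def evolve_round(samples, generate, judge):
    """One evolution round: rewrite each (image, instruction, answer) sample and
    keep a rewrite only if the judge deems it valid; otherwise keep the original."""
    evolved = []
    for sample in samples:
        direction = random.choice(list(EVOLUTION_PROMPTS))
        candidate = generate(
            image=sample["image"],
            prompt=f"{EVOLUTION_PROMPTS[direction]}\n\nOriginal: {sample['instruction']}",
        )
        if judge(sample, candidate):  # elimination step: drop failed evolutions
            evolved.append({**sample, "instruction": candidate, "evolved_from": direction})
        else:
            evolved.append(sample)    # fall back to the seed sample
    return evolved


def evolve_dataset(seed_samples, generate, judge, rounds=3):
    """Iteratively evolve a seed instruction set (e.g., SEED-163K) for several rounds."""
    data = list(seed_samples)
    for _ in range(rounds):
        data = evolve_round(data, generate, judge)
    return data
```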
2024
One-Shot Learning as Instruction Data Prospector for Large Language Models
Yunshui Li | Binyuan Hui | Xiaobo Xia | Jiaxi Yang | Min Yang | Lei Zhang | Shuzheng Si | Ling-Hao Chen | Junhao Liu | Tongliang Liu | Fei Huang | Yongbin Li
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Contemporary practices in instruction tuning often hinge on scaling up data without a clear strategy for ensuring data quality, inadvertently introducing noise that may compromise model performance. To address this challenge, we introduce Nuggets, a novel and efficient methodology that leverages one-shot learning to discern and select high-quality instruction data from extensive datasets. Nuggets assesses the potential of individual instruction examples to act as effective one-shot learning instances, thereby identifying those that can significantly improve performance across diverse tasks. Nuggets utilizes a scoring system based on the impact of candidate examples on the perplexity of a diverse anchor set, facilitating the selection of the most advantageous data for instruction tuning. Through rigorous evaluations on two benchmarks, namely MT-Bench and Alpaca-Eval, our study illustrates that instruction tuning with the top 1% of examples curated by Nuggets substantially outperforms conventional methods employing the entire dataset.
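Below is a minimal Python sketch of the one-shot scoring idea the abstract describes: each candidate is scored by how much prepending it as a one-shot demonstration lowers the average loss on a diverse anchor set, and only the top-scoring fraction is kept. The function names (`anchor_loss`, `score_candidates`, `lm_loss`) and the exact scoring rule are assumptions for illustration, not the released Nuggets implementation.

```python
# Hypothetical sketch of one-shot scoring for instruction data selection.
# lm_loss(prompt, response) is an assumed callable returning the language model's
# negative log-likelihood of `response` given `prompt`.

import math


def anchor_loss(lm_loss, anchor_set, demonstration=None):
    """Average per-example loss over the anchor set, optionally conditioning on a
    one-shot demonstration prepended to each anchor prompt."""
    total = 0.0
    for anchor in anchor_set:
        prompt = anchor["prompt"]
        if demonstration is not None:
            prompt = f"{demonstration['instruction']}\n{demonstration['response']}\n\n{prompt}"
        total += lm_loss(prompt, anchor["response"])
    return total / len(anchor_set)


def score_candidates(lm_loss, candidates, anchor_set):
    """Score each candidate as zero-shot anchor loss minus one-shot anchor loss
    (larger = the candidate helps more as a demonstration)."""
    base = anchor_loss(lm_loss, anchor_set)
    return [
        (candidate, base - anchor_loss(lm_loss, anchor_set, demonstration=candidate))
        for candidate in candidates
    ]


def select_top_fraction(scored, fraction=0.01):
    """Keep the top fraction (e.g., 1%) of candidates by score for instruction tuning."""
    scored = sorted(scored, key=lambda pair: pair[1], reverse=True)
    keep = max(1, math.ceil(fraction * len(scored)))
    return [candidate for candidate, _ in scored[:keep]]
```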