Corrupted but Not Broken: Understanding and Mitigating the Negative Impacts of Corrupted Data in Visual Instruction Tuning
Yunhao Gou | Hansi Yang | Zhili Liu | Kai Chen | Yihan Zeng | Lanqing Hong | Zhenguo Li | Qun Liu | Bo Han | James Kwok | Yu Zhang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Visual Instruction Tuning (VIT) aims to enhance Multimodal Large Language Models (MLLMs), yet its effectiveness is often compromised by corrupted datasets with issues such as hallucinated content, incorrect responses, and poor OCR quality. Previous approaches to these challenges have focused on refining datasets through high-quality data collection or rule-based filtering, both of which can be costly or limited in scope. In this paper, we conduct a systematic investigation into the impact of corrupted data on MLLMs and discover that, although corrupted data degrade model performance, the adverse effects are largely reversible: MLLMs are corrupted but not broken. Specifically, we find that disabling a small subset of parameters can almost fully restore performance. Moreover, corrupted MLLMs inherently retain the capability to differentiate between clean and corrupted samples, enabling dataset cleaning without external intervention. Building on these insights, we introduce a corruption-robust training paradigm that significantly surpasses existing strategies for mitigating the effects of corrupted data.
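The self-cleaning finding suggests a simple realization: if a corrupted model assigns systematically higher loss to corrupted samples than to clean ones, per-sample loss can serve as a filtering signal. Below is a minimal sketch of this idea under that assumption; the function names (`score_samples`, `filter_dataset`) and the `keep_fraction` threshold are illustrative choices for this sketch, not the paper's actual method.

```python
# A hedged sketch of loss-based self-filtering. It assumes the (possibly
# corrupted) model returns a scalar loss when given labeled batches, as
# Hugging Face-style causal LMs do. All names here are illustrative.
import torch
from torch.utils.data import DataLoader


@torch.no_grad()
def score_samples(model, dataset, collate_fn, device="cuda"):
    """Return a per-sample loss for every example in the dataset."""
    model.eval().to(device)
    loader = DataLoader(dataset, batch_size=1, collate_fn=collate_fn)
    scores = []
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        # Assumes the forward pass computes a token-averaged loss from
        # the labels included in the batch.
        loss = model(**batch).loss
        scores.append(loss.item())
    return scores


def filter_dataset(dataset, scores, keep_fraction=0.8):
    """Keep the keep_fraction of samples with the lowest loss."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    keep = set(order[: int(keep_fraction * len(scores))])
    return [dataset[i] for i in sorted(keep)]
```

In use, one would score the full VIT dataset with the corrupted model, drop the highest-loss fraction, and retrain on the remainder; the right cutoff would depend on the (usually unknown) corruption rate.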