Xin Jin
2025
Multimodal Language Models See Better When They Look Shallower
Haoran Chen | Junyan Lin | Xinghao Chen | Yue Fan | Jianfeng Dong | Xin Jin | Hui Su | Jinlan Fu | Xiaoyu Shen
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Multimodal large language models (MLLMs) typically extract visual features from the final layers of a pretrained Vision Transformer (ViT). This widespread deep-layer bias, however, is largely driven by empirical convention rather than principled analysis. While prior studies suggest that different ViT layers capture different types of information (shallower layers focus on fine visual details, while deeper layers align more closely with textual semantics), the impact of this variation on MLLM performance remains underexplored. We present the first comprehensive study of visual layer selection for MLLMs, analyzing representation similarity across ViT layers to establish shallow, middle, and deep layer groupings. Through extensive evaluation of MLLMs (1.4B–7B parameters) across 10 benchmarks encompassing 60+ tasks, we find that while deep layers excel in semantic-rich tasks like OCR, shallow and middle layers significantly outperform them on fine-grained visual tasks including counting, positioning, and object localization. Building on these insights, we propose a lightweight feature fusion method that strategically incorporates shallower layers, achieving consistent improvements over both single-layer and specialized fusion baselines. Our work offers the first principled study of visual layer selection in MLLMs, showing that MLLMs can often see better when they look shallower.
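To make the layer-selection idea concrete, the sketch below shows one plausible way to pull hidden states from a shallow, a middle, and a deep ViT layer and fuse them into visual tokens for an LLM. This is an illustration only, not the paper's exact recipe: the CLIP ViT-L backbone, the layer indices (6, 12, 23), the concatenate-and-project fusion module, and the LLM width of 4096 are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' exact method): fuse features from a
# shallow, a middle, and a deep ViT layer before passing them to the LLM.
# Layer indices, the fusion design, and the projector width are assumptions.
import torch
import torch.nn as nn
from transformers import CLIPVisionModel, CLIPImageProcessor

class MultiLayerFusion(nn.Module):
    """Concatenate features from the selected ViT layers and project them
    to the LLM embedding width (a simple stand-in for a fusion module)."""
    def __init__(self, vit_dim: int, llm_dim: int, num_layers: int):
        super().__init__()
        self.proj = nn.Linear(vit_dim * num_layers, llm_dim)

    def forward(self, layer_feats):              # list of [B, N, vit_dim]
        fused = torch.cat(layer_feats, dim=-1)   # [B, N, vit_dim * k]
        return self.proj(fused)                  # [B, N, llm_dim]

vision = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")

# Hypothetical shallow / middle / deep picks for a 24-layer ViT-L.
layer_ids = [6, 12, 23]
fusion = MultiLayerFusion(vit_dim=vision.config.hidden_size,
                          llm_dim=4096, num_layers=len(layer_ids))

@torch.no_grad()
def encode(images):
    pixel_values = processor(images=images, return_tensors="pt").pixel_values
    out = vision(pixel_values, output_hidden_states=True)
    # hidden_states[0] is the embedding output; layer i is hidden_states[i + 1].
    # Drop the CLS token, keeping patch tokens only.
    feats = [out.hidden_states[i + 1][:, 1:, :] for i in layer_ids]
    return fusion(feats)  # visual tokens ready to feed into the LLM
```

In this setup, swapping which entries appear in `layer_ids` is all it takes to compare shallow-, middle-, and deep-layer features under an otherwise identical pipeline, which mirrors the kind of controlled comparison the abstract describes.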