Task-Aware Resolution Optimization for Visual Large Language Models

Weiqing Luo, Zhen Tan, Yifan Li, Xinyu Zhao, Kwonjoon Lee, Behzad Dariush, Tianlong Chen


Abstract
Real-world vision-language applications demand varying levels of perceptual granularity. However, most existing visual large language models (VLLMs), such as LLaVA, assume a fixed input resolution for all downstream tasks, which leads to subpar performance. To address this problem, we first conduct a comprehensive, pioneering investigation into the resolution preferences of different vision-language tasks, revealing a correlation between resolution preference and (1) image complexity and (2) the variance of the VLLM's uncertainty across image input resolutions. Building on this insight, we propose an empirical formula to determine the optimal resolution for a given vision-language task, treating these two factors as the zeroth-order and first-order terms of a Taylor expansion on a given image input. Second, based on rigorous experiments, we propose a novel parameter-efficient fine-tuning technique that extends the visual input resolution of pre-trained VLLMs to the identified optimal resolution. Extensive experiments on various vision-language tasks validate the effectiveness of our method.
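The abstract's selection scheme can be sketched in code: a zeroth-order image-complexity term and a first-order uncertainty-variance term adjust a base resolution, which is then snapped to the nearest supported input size. This is a minimal illustration of the idea only; the base resolution, weights, and functional form below are assumptions, not the paper's actual formula.

```python
def variance(xs):
    """Population variance of a list of numbers."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def optimal_resolution(candidates, complexity, uncertainty_by_res,
                       base=224, alpha=1.0, beta=1.0):
    """Pick the candidate resolution closest to a complexity-adjusted target.

    candidates:         supported input resolutions, e.g. [224, 336, 448]
    complexity:         normalized image-complexity score in [0, 1]
    uncertainty_by_res: resolution -> model-uncertainty proxy (e.g. mean
                        token entropy when probing at that resolution)
    All names and defaults here are hypothetical stand-ins.
    """
    # Zeroth-order term (image complexity) plus first-order term (variance
    # of model uncertainty across resolutions) scale the base resolution
    # upward for harder inputs.
    u_var = variance(list(uncertainty_by_res.values()))
    target = base * (1.0 + alpha * complexity + beta * u_var)
    # Snap the continuous target to the nearest supported resolution.
    return min(candidates, key=lambda r: abs(r - target))
```

For example, a moderately complex image (complexity 0.5) with flat uncertainty across resolutions would map a 224 base to a 336 target under these illustrative defaults.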
Anthology ID:
2025.emnlp-main.795
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
15767–15781
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.795/
Cite (ACL):
Weiqing Luo, Zhen Tan, Yifan Li, Xinyu Zhao, Kwonjoon Lee, Behzad Dariush, and Tianlong Chen. 2025. Task-Aware Resolution Optimization for Visual Large Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 15767–15781, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Task-Aware Resolution Optimization for Visual Large Language Models (Luo et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.795.pdf
Checklist:
2025.emnlp-main.795.checklist.pdf