D-CoDe: Scaling Image-Pretrained VLMs to Video via Dynamic Compression and Question Decomposition

Yiyang Huang, Yizhou Wang, Yun Fu


Abstract
Video large language models (Vid-LLMs), which excel in diverse video-language tasks, can be effectively constructed by adapting image-pretrained vision-language models (VLMs). However, this adaptation remains challenging, as it requires processing dense and temporally extended visual inputs that exceed the capacity of image-based models. This paper identifies the perception bottleneck and token overload as key challenges in extending image-based VLMs to the video domain. To address these issues, we propose D-CoDe, a training-free adaptation framework that incorporates dynamic compression and question decomposition. Specifically, dynamic compression alleviates the perception bottleneck through adaptive selection of representative frames and content-aware aggregation of spatial tokens, thereby reducing redundancy while preserving informative content. In parallel, question decomposition mitigates token overload by reformulating the original query into sub-questions, guiding the model to focus on distinct aspects of the video and enabling more comprehensive understanding. Experiments demonstrate that D-CoDe effectively improves video understanding across various benchmarks. Furthermore, strong performance on the challenging long-video benchmark highlights the potential of D-CoDe in handling complex video-language tasks. Code is available at https://github.com/hukcc/D-CoDe.
Anthology ID:
2025.emnlp-main.597
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
11809–11822
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.597/
Cite (ACL):
Yiyang Huang, Yizhou Wang, and Yun Fu. 2025. D-CoDe: Scaling Image-Pretrained VLMs to Video via Dynamic Compression and Question Decomposition. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 11809–11822, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
D-CoDe: Scaling Image-Pretrained VLMs to Video via Dynamic Compression and Question Decomposition (Huang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.597.pdf
Checklist:
 2025.emnlp-main.597.checklist.pdf