ViPE: Visual Perception in Parameter Space for Efficient Video-Language Understanding

Shichen Lu, Tongtian Yue, Longteng Guo, Handong Li, Xingjian He, Si Liu, Jing Liu


Abstract
Existing video-language models (Video-LLMs) typically concatenate visual tokens with textual inputs for joint modeling. However, this token-level alignment is highly inefficient, especially when scaling to long videos with dense visual inputs. In this work, we propose ViPE, a video-to-parameter paradigm that eliminates redundant visual tokens by transforming video content into visual perceptual weights, which are injected directly into the LLM’s parameters. ViPE consists of a visual injection module, which compresses video features into a small set of perceptual queries using a hierarchical merge strategy, and a visual perception module, which integrates the resulting representations into the LLM through a lightweight LoRA-like mechanism. ViPE matches the performance of token-based baselines such as LLaVA while reducing FLOPs by 85% and inference time by up to 65%, offering a highly efficient and scalable solution for video understanding.
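To make the paradigm concrete, below is a minimal PyTorch sketch of the two modules as the abstract describes them: a visual injection module that compresses frame tokens into a small set of perceptual queries via a hierarchical (pairwise) merge followed by cross-attention, and a visual perception module that turns those queries into a LoRA-like low-rank update applied to a frozen linear layer of the LLM. All class names, the pairwise mean-merge, the mean-pooled conditioning vector, and the hyperparameters (num_queries, rank) are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

class VisualInjection(nn.Module):
    """Compress (B, T, N, D) frame tokens into k perceptual queries:
    hierarchically mean-merge adjacent frames, then cross-attend."""
    def __init__(self, dim, num_queries=32, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, frame_tokens):
        x = frame_tokens                        # (B, T, N, D)
        while x.shape[1] > 1:                   # pairwise merge until one "frame" remains
            if x.shape[1] % 2:                  # pad odd frame counts
                x = torch.cat([x, x[:, -1:]], dim=1)
            x = 0.5 * (x[:, 0::2] + x[:, 1::2])
        x = x.squeeze(1)                        # (B, N, D)
        q = self.queries.unsqueeze(0).expand(x.shape[0], -1, -1)
        out, _ = self.attn(q, x, x)             # (B, k, D)
        return out

class PerceptualLoRALinear(nn.Module):
    """A frozen LLM linear layer whose LoRA-like low-rank update is
    generated from the perceptual queries rather than learned as a
    static matrix, so video content enters through the weights."""
    def __init__(self, base, dim, rank=8):
        super().__init__()
        self.base = base.requires_grad_(False)
        self.rank = rank
        self.to_a = nn.Linear(dim, rank * base.in_features)
        self.to_b = nn.Linear(dim, base.out_features * rank)

    def forward(self, h, queries):
        z = queries.mean(dim=1)                 # (B, D) conditioning vector
        A = self.to_a(z).view(-1, self.rank, self.base.in_features)
        Bm = self.to_b(z).view(-1, self.base.out_features, self.rank)
        low = torch.einsum('bli,bri->blr', h, A)       # (B, L, rank)
        delta = torch.einsum('blr,bor->blo', low, Bm)  # (B, L, out)
        return self.base(h) + delta / self.rank

# Toy shapes: 2 clips, 16 frames, 196 patch tokens, hidden size 512
inject = VisualInjection(dim=512)
layer = PerceptualLoRALinear(nn.Linear(512, 512), dim=512)
queries = inject(torch.randn(2, 16, 196, 512))   # (2, 32, 512)
out = layer(torch.randn(2, 10, 512), queries)    # (2, 10, 512)

Because the visual signal enters through the low-rank weight delta rather than through concatenated tokens, the LLM's sequence length (and hence its attention cost) is independent of video length, which is the source of the reported FLOPs and latency savings.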
Anthology ID:
2025.emnlp-main.897
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
17775–17786
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.897/
Cite (ACL):
Shichen Lu, Tongtian Yue, Longteng Guo, Handong Li, Xingjian He, Si Liu, and Jing Liu. 2025. ViPE: Visual Perception in Parameter Space for Efficient Video-Language Understanding. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 17775–17786, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
ViPE: Visual Perception in Parameter Space for Efficient Video-Language Understanding (Lu et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.897.pdf
Checklist:
 2025.emnlp-main.897.checklist.pdf