Language-Guided Temporal Token Pruning for Efficient VideoLLM Processing

Yogesh Kumar


Abstract
Vision-Language Models (VLMs) struggle with long-form videos due to the quadratic complexity of attention mechanisms. We propose Language-Guided Temporal Token Pruning (LGTTP), which leverages temporal cues from queries to adaptively prune video tokens, preserving contextual continuity while reducing computational overhead. Unlike uniform pruning or keyframe selection, LGTTP retains higher token density in temporally relevant segments. Our model-agnostic framework integrates with TimeChat and LLaVA-Video, achieving a 65% reduction in computation while preserving 97–99% of the original performance. On QVHighlights, LGTTP improves HIT@1 by +9.5%, and on Charades-STA, it retains 99.6% of R@1. It excels on queries with explicit temporal markers and remains effective across general video understanding tasks.
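The abstract gives only the high-level mechanism, but the core idea (scoring temporal segments against the language query and spending a fixed token budget unevenly across them) can be sketched in a few lines. The sketch below is an illustration under our own assumptions, not the paper's released method: the function name prune_video_tokens, the softmax budget allocation, the one-pooled-token-per-frame representation, and the cosine-similarity scoring are all hypothetical choices, and keep_ratio=0.35 simply mirrors the reported 65% computation reduction.

# Illustrative sketch of language-guided temporal token pruning.
# Not the authors' code; all names and the budgeting scheme are assumptions.
import numpy as np

def prune_video_tokens(frame_tokens, query_emb, keep_ratio=0.35, n_segments=8):
    """Keep a fixed global budget of video tokens, spending more of it on
    temporal segments whose pooled embedding is similar to the query.

    frame_tokens: (T, D) array, one pooled visual token per frame.
    query_emb:    (D,) array, embedding of the language query.
    keep_ratio:   fraction of tokens retained overall (0.35 ~ 65% reduction).
    Returns indices of retained frames, sorted in temporal order.
    """
    T, _ = frame_tokens.shape
    bounds = np.linspace(0, T, n_segments + 1, dtype=int)

    # 1. Score each segment by cosine similarity between its mean token
    #    and the query embedding.
    q = query_emb / (np.linalg.norm(query_emb) + 1e-8)
    scores = []
    for s in range(n_segments):
        m = frame_tokens[bounds[s]:bounds[s + 1]].mean(axis=0)
        scores.append(float(m @ q / (np.linalg.norm(m) + 1e-8)))
    scores = np.array(scores)

    # 2. Convert scores into per-segment token budgets via softmax, so
    #    temporally relevant segments keep a higher token density.
    #    (Rounding and the min-1 floor make the budget approximate.)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    budget = np.maximum(1, np.round(w * keep_ratio * T)).astype(int)

    # 3. Within each segment, retain the frames most similar to the query,
    #    clipped to the segment's actual length.
    kept = []
    for s in range(n_segments):
        lo, hi = bounds[s], bounds[s + 1]
        seg = frame_tokens[lo:hi]
        sims = seg @ q / (np.linalg.norm(seg, axis=1) + 1e-8)
        k = min(budget[s], hi - lo)
        kept.extend(lo + np.argsort(sims)[-k:])
    return np.sort(np.array(kept))

# Toy usage: 64 frames with 256-dim embeddings.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((64, 256))
query = rng.standard_normal(256)
print(prune_video_tokens(tokens, query)[:10])

In a real VideoLLM the retained indices would select which visual tokens are passed on to the language model; the frame and query embeddings are assumed to come from the model's own encoders.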
Anthology ID:
2025.emnlp-main.451
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
8935–8942
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.emnlp-main.451/
DOI:
10.18653/v1/2025.emnlp-main.451
Cite (ACL):
Yogesh Kumar. 2025. Language-Guided Temporal Token Pruning for Efficient VideoLLM Processing. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 8935–8942, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Language-Guided Temporal Token Pruning for Efficient VideoLLM Processing (Kumar, EMNLP 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.emnlp-main.451.pdf
Checklist:
2025.emnlp-main.451.checklist.pdf