Leqi Shen
2025
Beyond Logits: Aligning Feature Dynamics for Effective Knowledge Distillation
Guoqiang Gong | Jiaxing Wang | Jin Xu | Deping Xiang | Zicheng Zhang | Leqi Shen | Yifeng Zhang | JunhuaShu | ZhaolongXing | Zhen Chen | Pengzhang Liu | Ke Zhang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Knowledge distillation (KD) compresses large language models (LLMs), known as teacher models, into lightweight versions called student models, enabling efficient inference and downstream applications. However, prevailing approaches accomplish this by predominantly focusing on matching the final output distributions of student/teacher models. Drawing on the perspective that transformers can be viewed as discretizing ordinary differential equations (ODEs) on integer time steps (corresponding to layer indices), where intermediate features evolve across layers, we argue that effective KD requires aligning the entire feature dynamics between teacher and student models, which we call feature dynamics distillation (FDD). This alignment involves matching both the feature trajectory and its first-order derivative, rather than just the final states. Our approach extends the original KD objective with two additional loss terms: layer-wise feature KD, which matches the discretized feature trajectory, and layer feature delta KD, which matches first-order changes in features across adjacent layers. Extensive experiments on various tasks validate the effectiveness of our distillation method.
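Conceptually, the two extra loss terms can be written as simple per-layer matching objectives. The PyTorch sketch below is based only on this abstract; the function name, the MSE matching criterion, and the assumption that student and teacher layers have already been paired and projected to the same hidden size are illustrative, not details from the paper.

```python
import torch
import torch.nn.functional as F

def feature_dynamics_losses(student_feats, teacher_feats):
    """Illustrative sketch of the two extra FDD terms described in the abstract.

    student_feats / teacher_feats: lists of per-layer hidden states of equal
    length (e.g. after mapping student layers to teacher layers) and equal
    hidden size (e.g. via a learned projection, omitted here).
    """
    # Layer-wise feature KD: match the discretized feature trajectory.
    traj_loss = sum(
        F.mse_loss(s, t.detach()) for s, t in zip(student_feats, teacher_feats)
    ) / len(student_feats)

    # Layer feature delta KD: match first-order changes across adjacent layers.
    s_deltas = [b - a for a, b in zip(student_feats[:-1], student_feats[1:])]
    t_deltas = [b - a for a, b in zip(teacher_feats[:-1], teacher_feats[1:])]
    delta_loss = sum(
        F.mse_loss(ds, dt.detach()) for ds, dt in zip(s_deltas, t_deltas)
    ) / len(s_deltas)

    # These terms would be added to the original (logit-level) KD objective.
    return traj_loss, delta_loss
```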
AdaTP: Attention-Debiased Token Pruning for Video Large Language Models
Fengyuan Sun | Leqi Shen | Hui Chen | Sicheng Zhao | Jungong Han | Guiguang Ding
Findings of the Association for Computational Linguistics: EMNLP 2025
Video Large Language Models (Video LLMs) have achieved remarkable results in video understanding tasks. However, they often suffer from heavy computational overhead due to the large number of visual tokens generated from multiple video frames. Existing visual token compression methods often rely on attention scores from language models as guidance. However, these scores exhibit inherent biases: global bias reflects a tendency to focus on the two ends of the visual token sequence, while local bias leads to an over-concentration on the same spatial positions across different frames. To address attention bias, we propose Attention-Debiased Token Pruning for Video Large Language Models (AdaTP), a novel token pruning pipeline for Video LLMs. AdaTP integrates two dedicated debiasing modules into the pipeline, targeting global attention bias and local attention bias, respectively. Without the need for additional training, our method significantly reduces the computational overhead of Video LLMs while retaining the performance of vanilla models. Extensive evaluation shows that AdaTP achieves state-of-the-art performance on various commonly used video understanding benchmarks. In particular, on LLaVA-OneVision-7B, AdaTP maintains performance without degradation while using at most 27.3% of the FLOPs of the vanilla model. Our code will be released soon.
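Based only on this abstract, the pipeline can be summarized as: take per-token attention scores from the language model, correct the global and local biases, and keep the top-scoring visual tokens without any extra training. The PyTorch sketch below is illustrative; the mean-centring and trend-removal steps are simplified stand-ins for AdaTP's dedicated debiasing modules, and all names and parameters are assumptions rather than details from the paper.

```python
import torch
import torch.nn.functional as F

def debiased_token_pruning(attn_scores, num_frames, tokens_per_frame, keep_ratio=0.25):
    """Illustrative sketch of attention-debiased token pruning in the spirit of AdaTP.

    attn_scores: 1-D tensor of length num_frames * tokens_per_frame holding the
    attention each visual token receives from the language model.
    Returns the sorted indices of the tokens to keep.
    """
    scores = attn_scores.view(num_frames, tokens_per_frame).clone()

    # Local debiasing (stand-in): counter over-concentration on the same
    # spatial position in every frame by centring each position across frames.
    scores = scores - scores.mean(dim=0, keepdim=True)

    # Global debiasing (stand-in): counter the tendency to favour the two ends
    # of the flattened token sequence by removing a smoothed positional trend
    # (arbitrary 31-token averaging window).
    flat = scores.flatten()
    trend = F.avg_pool1d(flat.view(1, 1, -1), kernel_size=31, stride=1, padding=15).flatten()
    flat = flat - trend

    # Keep the top-scoring tokens after debiasing; no additional training needed.
    k = max(1, int(keep_ratio * flat.numel()))
    keep_idx = torch.topk(flat, k).indices
    return torch.sort(keep_idx).values
```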
Co-authors
- Zhen Chen 1
- Hui Chen 1
- Guiguang Ding 1
- Guoqiang Gong 1
- Jungong Han 1