Zhen Xing


2025

ProLongVid: A Simple but Strong Baseline for Long-context Video Instruction Tuning
Rui Wang | Bohao Li | Xiyang Dai | Jianwei Yang | Yi-Ling Chen | Zhen Xing | Yifan Yang | Dongdong Chen | Xipeng Qiu | Zuxuan Wu | Yu-Gang Jiang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Video understanding is essential for multimodal large language models (MLLMs) to interact effectively with users and the real world. However, analyzing long videos remains a major challenge due to the lack of high-quality video instruction data and effective training strategies. In this paper, we introduce a simple yet effective baseline for long-context video understanding, including dataset construction and training recipes. We curate a large-scale video instruction dataset with over 1M samples, encompassing videos from a few seconds to several minutes across diverse sources, without any human annotations. Additionally, we propose a progressive video instruction tuning strategy that incrementally increases input context length, enabling better utilization of videos of varying durations. Comprehensive experiments demonstrate that our dataset significantly outperforms existing video instruction datasets for fine-tuning MLLMs. Furthermore, our training approach establishes a strong video MLLM baseline, surpassing previous open-source models on video benchmarks and outperforming proprietary models like GPT-4V and GPT-4o-mini on VideoMME, even with a compact 7B model.
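A minimal sketch of the kind of progressive context-length curriculum the abstract describes, written for illustration only: the stage names, per-stage frame caps, and the uniform-subsampling helper are assumptions, not the authors' released recipe.

```python
# Illustrative sketch (not the ProLongVid code): a progressive context-length
# curriculum for video instruction tuning. Stage names, frame caps, and the
# subsampling helper are assumptions for illustration only.

from dataclasses import dataclass
from typing import List, Sequence


@dataclass
class Stage:
    name: str
    max_frames: int  # hypothetical per-stage cap on visual context length
    epochs: int


# Hypothetical schedule: short clips first, then progressively longer context.
SCHEDULE: List[Stage] = [
    Stage("short", max_frames=32, epochs=1),
    Stage("medium", max_frames=128, epochs=1),
    Stage("long", max_frames=512, epochs=1),
]


def subsample(frames: Sequence[int], max_frames: int) -> List[int]:
    """Uniformly subsample frame indices so the clip fits the stage's cap."""
    if len(frames) <= max_frames:
        return list(frames)
    step = len(frames) / max_frames
    return [frames[int(i * step)] for i in range(max_frames)]


def train(dataset: List[Sequence[int]]) -> None:
    """Run each curriculum stage over the dataset with its own context cap."""
    for stage in SCHEDULE:
        for _ in range(stage.epochs):
            for video_frames in dataset:
                clip = subsample(video_frames, stage.max_frames)
                # model.step(clip, instruction)  # placeholder for the actual update
                _ = clip


if __name__ == "__main__":
    toy_dataset = [list(range(n)) for n in (16, 300, 2000)]  # frame counts
    train(toy_dataset)
```

The point of the staged schedule is simply that short-video batches train cheaply early on, while the longest contexts are only paid for once the model already handles shorter ones.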

2023

TranSFormer: Slow-Fast Transformer for Machine Translation
Bei Li | Yi Jing | Xu Tan | Zhen Xing | Tong Xiao | Jingbo Zhu
Findings of the Association for Computational Linguistics: ACL 2023

Learning multiscale Transformer models has been shown to be a viable approach to improving machine translation systems. Prior research has primarily focused on treating subwords as the basic units in developing such systems. However, the incorporation of fine-grained character-level features into multiscale Transformers has not yet been explored. In this work, we present a Slow-Fast two-stream learning model, referred to as TranSFormer, which uses a “slow” branch to process subword sequences and a “fast” branch to process the longer character sequences. The model is efficient because the fast branch is kept lightweight through a reduced model width, yet it still provides useful fine-grained features to the slow branch. Our TranSFormer shows consistent BLEU improvements (more than 1 BLEU point) on several machine translation benchmarks.
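A minimal sketch of a two-stream encoder in the spirit of the abstract, assuming standard PyTorch modules: a wide "slow" subword branch, a narrow "fast" character branch, and a cross-attention fusion step. The widths, layer counts, and fusion choice are assumptions for illustration, not the paper's architecture.

```python
# Illustrative sketch (not the TranSFormer release): a "slow" subword branch
# fused with features from a lightweight "fast" character branch. Dimensions
# and the cross-attention fusion are assumptions for illustration only.

import torch
import torch.nn as nn


class SlowFastEncoder(nn.Module):
    def __init__(self, subword_vocab: int, char_vocab: int,
                 d_slow: int = 512, d_fast: int = 128,
                 n_slow_layers: int = 6, n_fast_layers: int = 2):
        super().__init__()
        self.subword_emb = nn.Embedding(subword_vocab, d_slow)
        self.char_emb = nn.Embedding(char_vocab, d_fast)

        slow_layer = nn.TransformerEncoderLayer(d_slow, nhead=8, batch_first=True)
        fast_layer = nn.TransformerEncoderLayer(d_fast, nhead=4, batch_first=True)
        self.slow = nn.TransformerEncoder(slow_layer, n_slow_layers)
        self.fast = nn.TransformerEncoder(fast_layer, n_fast_layers)

        # Project the narrow character features up to the slow width, then let
        # each subword position attend over the character sequence.
        self.proj = nn.Linear(d_fast, d_slow)
        self.fuse = nn.MultiheadAttention(d_slow, num_heads=8, batch_first=True)

    def forward(self, subwords: torch.Tensor, chars: torch.Tensor) -> torch.Tensor:
        slow_h = self.slow(self.subword_emb(subwords))        # (B, S, d_slow)
        fast_h = self.proj(self.fast(self.char_emb(chars)))   # (B, C, d_slow)
        fused, _ = self.fuse(slow_h, fast_h, fast_h)           # char-level detail
        return slow_h + fused


if __name__ == "__main__":
    enc = SlowFastEncoder(subword_vocab=32000, char_vocab=256)
    out = enc(torch.randint(0, 32000, (2, 10)), torch.randint(0, 256, (2, 40)))
    print(out.shape)  # torch.Size([2, 10, 512])
```

Keeping the character branch narrow (d_fast much smaller than d_slow) is what makes the second stream cheap despite the longer character sequence, while the fusion step is how its fine-grained features reach the subword branch.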