Dongdong Chen


2025

ProLongVid: A Simple but Strong Baseline for Long-context Video Instruction Tuning
Rui Wang | Bohao Li | Xiyang Dai | Jianwei Yang | Yi-Ling Chen | Zhen Xing | Yifan Yang | Dongdong Chen | Xipeng Qiu | Zuxuan Wu | Yu-Gang Jiang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Video understanding is essential for multimodal large language models (MLLMs) to interact effectively with users and the real world. However, analyzing long videos remains a major challenge due to the lack of high-quality video instruction data and effective training strategies. In this paper, we introduce a simple yet effective baseline for long-context video understanding, covering both dataset construction and training recipes. We curate a large-scale video instruction dataset with over 1M samples, encompassing videos ranging from a few seconds to several minutes in length, drawn from diverse sources and requiring no human annotation. Additionally, we propose a progressive video instruction tuning strategy that incrementally increases the input context length, enabling better utilization of videos of varying durations. Comprehensive experiments demonstrate that our dataset significantly outperforms existing video instruction datasets for fine-tuning MLLMs. Furthermore, our training approach establishes a strong video MLLM baseline, surpassing previous open-source models on video benchmarks and outperforming proprietary models such as GPT-4V and GPT-4o-mini on VideoMME, even with a compact 7B model.
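To make the progressive tuning idea concrete, here is a minimal, illustrative sketch of a stage-wise context-length schedule. The stage thresholds, frame budgets, and names such as `VideoSample` and `train_stage` are hypothetical and not taken from the paper; the point is only the pattern of admitting longer videos and a larger context budget at each successive stage.

```python
# Illustrative sketch of progressive context-length scheduling for video
# instruction tuning. All names and numbers here are assumptions, not the
# paper's actual recipe.

from dataclasses import dataclass
from typing import List


@dataclass
class VideoSample:
    video_id: str
    duration_s: float   # clip length in seconds
    instruction: str    # instruction/question paired with the video
    response: str       # target answer


# Hypothetical stage schedule: each stage admits longer videos and allots
# a larger frame (context) budget than the previous one.
STAGES = [
    {"max_duration_s": 30, "frame_budget": 32},
    {"max_duration_s": 180, "frame_budget": 128},
    {"max_duration_s": 600, "frame_budget": 512},
]


def select_for_stage(samples: List[VideoSample], max_duration_s: float) -> List[VideoSample]:
    """Keep only samples short enough for the current stage's context window."""
    return [s for s in samples if s.duration_s <= max_duration_s]


def progressive_tuning(samples: List[VideoSample]) -> None:
    """Run one tuning pass per stage, growing the context length each time."""
    for i, stage in enumerate(STAGES, start=1):
        subset = select_for_stage(samples, stage["max_duration_s"])
        print(f"Stage {i}: {len(subset)} samples, frame budget {stage['frame_budget']}")
        # train_stage(model, subset, frames=stage["frame_budget"])  # hypothetical trainer call


if __name__ == "__main__":
    demo = [
        VideoSample("a", 12.0, "What happens first?", "A person enters the room."),
        VideoSample("b", 240.0, "Summarize the video.", "A cooking tutorial ..."),
    ]
    progressive_tuning(demo)
```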

2024

i-Code V2: An Autoregressive Generation Framework over Vision, Language, and Speech Data
Ziyi Yang | Mahmoud Khademi | Yichong Xu | Reid Pryzant | Yuwei Fang | Chenguang Zhu | Dongdong Chen | Yao Qian | Xuemei Gao | Yi-Ling Chen | Robert Gmyr | Naoyuki Kanda | Noel Codella | Bin Xiao | Yu Shi | Lu Yuan | Takuya Yoshioka | Michael Zeng | Xuedong Huang
Findings of the Association for Computational Linguistics: NAACL 2024

The convergence of text, visual, and audio data is crucial to human-like artificial intelligence; however, the current Vision-Language-Speech landscape is dominated by encoder-only models that lack generative abilities. We propose closing this gap with i-Code V2, one of the first models capable of generating natural language from any combination of Vision, Language, and Speech data. i-Code V2 leverages state-of-the-art single-modality encoders, combining their outputs with a new modality-fusing encoder to project combinations of modalities into a shared representational space. Language tokens are generated from these representations via an autoregressive decoder. i-Code V2 is pretrained end-to-end on a large collection of dual- and single-modality datasets with a novel text completion objective that can be generalized across arbitrary combinations of modalities. i-Code V2 matches or outperforms state-of-the-art single- and dual-modality baselines on 7 multimodal tasks, demonstrating the power of generative multimodal pretraining across a diversity of tasks and signals.
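As a rough illustration of the architecture described in the abstract (and not the authors' implementation), the following PyTorch-style sketch wires per-modality projections into a shared space, a modality-fusing encoder, and an autoregressive decoder that emits language tokens. All dimensions, layer counts, and module names are assumed for the example.

```python
# Schematic sketch of the encode-fuse-decode pattern described in the abstract.
# Module sizes, feature dimensions, and names are illustrative assumptions.

import torch
import torch.nn as nn


class FusionToText(nn.Module):
    def __init__(self, d_model=512, vocab_size=32000, n_layers=2, n_heads=8):
        super().__init__()
        # Stand-ins for single-modality encoders: project each modality's
        # pre-extracted features into the shared width d_model.
        self.vision_proj = nn.Linear(768, d_model)
        self.speech_proj = nn.Linear(1024, d_model)
        self.text_proj = nn.Linear(512, d_model)
        # Modality-fusing encoder over the concatenated sequence.
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.fusion = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # Autoregressive decoder that cross-attends to the fused representation.
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=n_layers)
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, vision_feats, speech_feats, text_feats, target_tokens):
        # Project each modality into the shared space and concatenate along time.
        fused_in = torch.cat([
            self.vision_proj(vision_feats),
            self.speech_proj(speech_feats),
            self.text_proj(text_feats),
        ], dim=1)
        memory = self.fusion(fused_in)
        # Teacher-forced autoregressive decoding with a causal mask.
        tgt = self.token_emb(target_tokens)
        t = tgt.size(1)
        causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        return self.lm_head(hidden)  # next-token logits


if __name__ == "__main__":
    model = FusionToText()
    logits = model(
        torch.randn(2, 16, 768),    # vision features
        torch.randn(2, 50, 1024),   # speech features
        torch.randn(2, 12, 512),    # text features
        torch.randint(0, 32000, (2, 8)),
    )
    print(logits.shape)  # torch.Size([2, 8, 32000])
```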

1998

A Test Environment for Natural Language Understanding Systems
Li Li | Deborah A. Dahl | Lewis M. Norton | Marcia C. Linebarger | Dongdong Chen
COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics
