2024
DeVAn: Dense Video Annotation for Video-Language Models
Tingkai Liu | Yunzhe Tao | Haogeng Liu | Qihang Fang | Ding Zhou | Huaibo Huang | Ran He | Hongxia Yang
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
We present a novel human-annotated dataset, termed DeVAn (Dense Video Annotation), for evaluating the ability of visual-language models to generate both short and long descriptions of real-world video clips. The dataset contains 8.5K YouTube video clips, each 20-60 seconds in duration, covering a wide range of topics and interests. Each video clip is independently annotated by 5 human annotators, producing both captions (1 sentence) and summaries (3-10 sentences). Given any video from the dataset and its corresponding ASR information, we evaluate visual-language models on caption or summary generation grounded in both the visual and auditory content of the video. Models are also evaluated on caption- and summary-based retrieval tasks, where the summary-based retrieval task requires identifying a target video given excerpts of its summary. Given the novel nature of the paragraph-length video summarization task, we compare existing evaluation metrics on their alignment with human preferences and find that model-based evaluation metrics provide more semantically oriented and human-aligned evaluation. Finally, we benchmark a wide range of current video-language models on DeVAn, and we aim for DeVAn to serve as a useful evaluation set in the age of large language models and complex multi-modal tasks. Code is available at https://github.com/TK-21st/DeVAn.
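As a rough illustration of what a model-based metric looks like in practice, the sketch below scores a generated summary against a reference with BERTScore. This is an assumption for illustration only, not the authors' released evaluation code (see the repository above), and the example summaries are hypothetical.

```python
# Minimal sketch of model-based summary evaluation with BERTScore.
# Assumes `pip install bert-score`; summaries below are made up.
from bert_score import score

generated = ["A chef demonstrates how to fold dumplings step by step."]
references = ["A cook shows the process of wrapping dumplings by hand."]

# P, R, F1 are tensors with one entry per candidate-reference pair.
P, R, F1 = score(generated, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1[0].item():.3f}")
```

Unlike n-gram metrics such as BLEU or ROUGE, an embedding-based score of this kind rewards semantic overlap even when the surface wording differs, which is the property the abstract attributes to model-based metrics.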