End-to-end Dense Video Captioning as Sequence Generation

Wanrong Zhu, Bo Pang, Ashish V. Thapliyal, William Yang Wang, Radu Soricut


Abstract
Dense video captioning aims to identify the events of interest in an input video and generate descriptive captions for each event. Previous approaches usually follow a two-stage generative process, which first proposes a segment for each event and then renders a caption for each identified segment. Recent advances in large-scale sequence generation pretraining have seen great success in unifying task formulations for a wide variety of tasks, but so far, more complex tasks such as dense video captioning have not been able to fully utilize this powerful paradigm. In this work, we show how to model the two subtasks of dense video captioning jointly as one sequence generation task, simultaneously predicting the events and the corresponding descriptions. Experiments on YouCook2 and ViTT show encouraging results and indicate the feasibility of integrating complex tasks such as end-to-end dense video captioning into large-scale pretrained models.
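To make the joint formulation concrete, below is a minimal sketch (not the paper's exact serialization, and all token names and the helper function are hypothetical) of how event segments and their captions could be flattened into a single target sequence for a seq2seq model: timestamps are quantized into discrete time tokens and interleaved with caption text, so one decoder predicts segments and descriptions together.

```python
# Hypothetical sketch: serialize (start, end, caption) events into one target
# sequence so a single seq2seq decoder can predict segments and captions jointly.
# Token names (<time_k>, <event>) are illustrative assumptions, not the paper's format.

def build_target_sequence(events, num_bins=100, video_duration=1.0):
    """events: list of (start_sec, end_sec, caption) tuples."""
    pieces = []
    for start, end, caption in sorted(events, key=lambda e: e[0]):
        # Quantize continuous timestamps into a fixed vocabulary of time tokens.
        s_bin = int(num_bins * start / video_duration)
        e_bin = int(num_bins * end / video_duration)
        pieces.append(f"<time_{s_bin}> <time_{e_bin}> {caption}")
    return " <event> ".join(pieces)


if __name__ == "__main__":
    events = [
        (12.0, 45.0, "crack the eggs into a bowl"),
        (45.0, 90.0, "whisk the eggs with salt and pepper"),
    ]
    print(build_target_sequence(events, num_bins=100, video_duration=300.0))
```

Under this kind of formulation, the same cross-entropy objective used for caption generation also supervises segment prediction, which is what allows the two subtasks to be trained end to end in one model.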
Anthology ID:
2022.coling-1.498
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Note:
Pages:
5651–5665
URL:
https://aclanthology.org/2022.coling-1.498
Cite (ACL):
Wanrong Zhu, Bo Pang, Ashish V. Thapliyal, William Yang Wang, and Radu Soricut. 2022. End-to-end Dense Video Captioning as Sequence Generation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5651–5665, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
End-to-end Dense Video Captioning as Sequence Generation (Zhu et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.498.pdf
Data:
ViTT, WikiHow, YouCook2