Bridge Video and Text with Cascade Syntactic Structure

Guolong Wang, Zheng Qin, Kaiping Xu, Kai Huang, Shuxiong Ye


Abstract
We present a video captioning approach that encodes features by progressively completing the syntactic structure (LSTM-CSS). To construct the basic syntactic structure (i.e., subject, predicate, and object), we use a Conditional Random Field to label semantic representations (i.e., motions, objects). We argue that, to improve the comprehensiveness of the description, local features within object regions can be used to generate complementary syntactic elements (e.g., attributes, adverbials). Inspired by the redundancy of human receptors, we utilize a Region Proposal Network to focus on the object regions. To model the final temporal dynamics, a Recurrent Neural Network with Path Embeddings is adopted. We demonstrate the effectiveness of LSTM-CSS in generating natural sentences, achieving 42.3% BLEU@4 and 28.5% METEOR. Superior performance compared to state-of-the-art methods is reported on a large video description dataset (MSR-VTT-2016).
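As a rough illustration of the cascade idea summarized in the abstract, the minimal PyTorch sketch below fuses global video features (which drive the basic syntactic elements) with region-level features from an RPN (which drive the complementary elements) before decoding words with an LSTM. All module names, feature dimensions, and the linear stand-ins for the CRF labeller and the path-embedding decoder are assumptions made for illustration; this is not the authors' implementation.

import torch
import torch.nn as nn

class CascadeCaptionerSketch(nn.Module):
    """Illustrative cascade of basic and complementary syntactic features."""

    def __init__(self, feat_dim=2048, region_dim=1024, hid_dim=512, vocab_size=10000):
        super().__init__()
        # Stage 1: global video features -> basic syntactic elements
        # (subject, predicate, object); a CRF labeller would normally
        # produce these, approximated here by a linear projection.
        self.basic_proj = nn.Linear(feat_dim, hid_dim)
        # Stage 2: RPN region features -> complementary elements
        # (attributes, adverbials), again approximated by a projection.
        self.region_proj = nn.Linear(region_dim, hid_dim)
        # Decoder: an LSTM over the cascaded representation stands in
        # for the RNN with Path Embeddings described in the paper.
        self.decoder = nn.LSTM(2 * hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, global_feats, region_feats):
        # global_feats: (batch, T, feat_dim); region_feats: (batch, T, region_dim)
        basic = torch.tanh(self.basic_proj(global_feats))
        comp = torch.tanh(self.region_proj(region_feats))
        fused = torch.cat([basic, comp], dim=-1)  # cascade the two stages
        hidden, _ = self.decoder(fused)
        return self.out(hidden)                    # per-step word logits

model = CascadeCaptionerSketch()
logits = model(torch.randn(2, 8, 2048), torch.randn(2, 8, 1024))
print(logits.shape)  # torch.Size([2, 8, 10000])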
Anthology ID:
C18-1303
Volume:
Proceedings of the 27th International Conference on Computational Linguistics
Month:
August
Year:
2018
Address:
Santa Fe, New Mexico, USA
Editors:
Emily M. Bender, Leon Derczynski, Pierre Isabelle
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
3576–3585
URL:
https://aclanthology.org/C18-1303
Cite (ACL):
Guolong Wang, Zheng Qin, Kaiping Xu, Kai Huang, and Shuxiong Ye. 2018. Bridge Video and Text with Cascade Syntactic Structure. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3576–3585, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
Bridge Video and Text with Cascade Syntactic Structure (Wang et al., COLING 2018)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/C18-1303.pdf
Data
MS COCO, MSR-VTT