Abstract
Recent years have witnessed an increasing interest in image-based question-answering (QA) tasks. However, due to data limitations, there has been much less work on video-based QA. In this paper, we present TVQA, a large-scale video QA dataset based on 6 popular TV shows. TVQA consists of 152,545 QA pairs from 21,793 clips, spanning over 460 hours of video. Questions are designed to be compositional in nature, requiring systems to jointly localize relevant moments within a clip, comprehend subtitle-based dialogue, and recognize relevant visual concepts. We provide analyses of this new dataset as well as several baselines and a multi-stream end-to-end trainable neural network framework for the TVQA task. The dataset is publicly available at http://tvqa.cs.unc.edu.
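To make the task format concrete, the sketch below shows what a single TVQA-style example and a moment-localization check might look like. This is an illustration only: the field names, the five-candidate multiple-choice layout, and the sample record are assumptions, not the released schema (see http://tvqa.cs.unc.edu for the actual data format).

```python
# Minimal, illustrative sketch of a TVQA-style example record.
# All field names and values below are hypothetical, chosen to mirror the
# abstract's description (clip-grounded questions, subtitle dialogue,
# localized moments); consult http://tvqa.cs.unc.edu for the real schema.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class TVQAExample:
    question: str                              # natural-language question about a clip
    answers: List[str]                         # candidate answers (multiple choice)
    correct_idx: int                           # index of the ground-truth answer
    clip_id: str                               # which clip the question is grounded in
    subtitles: List[Tuple[float, float, str]]  # (start_s, end_s, dialogue line)


def subtitles_in_span(ex: TVQAExample, start_s: float, end_s: float) -> List[str]:
    """Return dialogue lines overlapping a localized moment [start_s, end_s]."""
    return [text for (s, e, text) in ex.subtitles if s < end_s and e > start_s]


example = TVQAExample(
    question="What is the character holding when the conversation starts?",
    answers=["A laptop", "A comic book", "A marker", "A mug", "A takeout box"],
    correct_idx=1,
    clip_id="show_s01e01_clip_00",  # hypothetical identifier
    subtitles=[
        (3.2, 5.8, "A: This is the new issue."),
        (6.0, 8.1, "B: You already read it twice."),
    ],
)

print(subtitles_in_span(example, 3.0, 7.0))  # both lines overlap this span
```

The overlap helper mirrors the localization requirement described in the abstract: answering is meant to hinge on grounding the question in a specific temporal span of the clip, not the clip as a whole.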
- Anthology ID: D18-1167
- Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
- Month: October-November
- Year: 2018
- Address: Brussels, Belgium
- Editors: Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
- Venue: EMNLP
- SIG: SIGDAT
- Publisher: Association for Computational Linguistics
- Pages: 1369–1379
- URL: https://aclanthology.org/D18-1167
- DOI: 10.18653/v1/D18-1167
- Cite (ACL): Jie Lei, Licheng Yu, Mohit Bansal, and Tamara Berg. 2018. TVQA: Localized, Compositional Video Question Answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1369–1379, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal): TVQA: Localized, Compositional Video Question Answering (Lei et al., EMNLP 2018)
- PDF: https://preview.aclanthology.org/naacl24-info/D18-1167.pdf
- Code: additional community code
- Data: TVQA, CLEVR, COCO-QA, ImageNet, LSMDC, MCTest, MovieFIB, MovieQA, SUTD-TrafficQA, Visual Madlibs, Visual Question Answering, Visual7W