Video Caption Dataset for Describing Human Actions in Japanese

Yutaro Shigeto, Yuya Yoshikawa, Jiaqing Lin, Akikazu Takeuchi


Abstract
In recent years, automatic video caption generation has attracted considerable attention. This paper focuses on generating Japanese captions that describe human actions. While most currently available video caption datasets have been constructed for English, no equivalent Japanese dataset exists. To address this, we constructed a large-scale Japanese video caption dataset consisting of 79,822 videos and 399,233 captions. Each caption in our dataset describes a video in the form of “who does what and where.” Identifying the person, the place, and the action is essential for describing human actions; indeed, when we describe human actions, we usually mention the scene, the person, and the action. In our experiments, we evaluated two caption generation methods to obtain benchmark results, and we investigated whether those methods could specify “who does what and where.”
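The abstract describes video–caption pairs in which each Japanese caption states “who does what and where.” As a minimal illustrative sketch only (the file layout, field names such as video_id and captions, and the example captions below are assumptions for illustration, not the released dataset format), such entries might be represented and loaded like this:

```python
import json

# Hypothetical record layout: one object per video, with a list of Japanese
# captions describing "who does what and where". Field names and the example
# captions are invented for illustration; they are not the dataset's schema.
example_entry = {
    "video_id": "video_00001",
    "captions": [
        "男性が台所で料理をしている",            # "A man is cooking in the kitchen"
        "エプロンをした人が台所で野菜を切っている",  # "A person in an apron is cutting vegetables in the kitchen"
    ],
}


def load_entries(path):
    """Load a list of {video_id, captions} records from a JSON file."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)


if __name__ == "__main__":
    # Print the assumed structure; ensure_ascii=False keeps Japanese readable.
    print(json.dumps(example_entry, ensure_ascii=False, indent=2))
```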
Anthology ID:
2020.lrec-1.574
Volume:
Proceedings of the Twelfth Language Resources and Evaluation Conference
Month:
May
Year:
2020
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
4664–4670
Language:
English
URL:
https://aclanthology.org/2020.lrec-1.574
Cite (ACL):
Yutaro Shigeto, Yuya Yoshikawa, Jiaqing Lin, and Akikazu Takeuchi. 2020. Video Caption Dataset for Describing Human Actions in Japanese. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4664–4670, Marseille, France. European Language Resources Association.
Cite (Informal):
Video Caption Dataset for Describing Human Actions in Japanese (Shigeto et al., LREC 2020)
PDF:
https://aclanthology.org/2020.lrec-1.574.pdf
Data:
STAIR Actions Captions, Charades, MSVD