Video-Grounded Dialogues with Pretrained Generation Language Models

Hung Le, Steven C.H. Hoi


Abstract
Pre-trained language models have shown remarkable success on downstream NLP tasks thanks to their ability to capture dependencies in textual data and to generate natural responses. In this paper, we leverage the power of pre-trained language models to improve video-grounded dialogue, a challenging task involving complex features with different dynamics: (1) video features, which extend across both spatial and temporal dimensions; and (2) dialogue features, which involve semantic dependencies over multiple dialogue turns. We propose a framework that extends GPT-2 to tackle these challenges by formulating video-grounded dialogue as a sequence-to-sequence task, combining visual and textual representations into a structured sequence, and fine-tuning a large pre-trained GPT-2 network. Our framework allows fine-tuned language models to capture dependencies across multiple modalities and over different levels of information: the spatio-temporal level in video and the token-sentence level in the dialogue context. We achieve promising improvements on the Audio-Visual Scene-Aware Dialogues (AVSD) benchmark from DSTC7, which supports a potential direction in this line of research.
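The abstract's core recipe (project video features into GPT-2's embedding space, concatenate them with the dialogue token embeddings into one structured sequence, and fine-tune end-to-end) can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration under assumed names and dimensions (the class name VideoGroundedGPT2, a 2048-dim video feature, and HuggingFace's GPT2LMHeadModel), not the authors' released implementation; it also omits any modality-specific segment and position encodings the full model may use.

```python
# Minimal sketch: fuse video features and dialogue tokens into one GPT-2 input.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel

class VideoGroundedGPT2(nn.Module):  # hypothetical class name
    def __init__(self, video_feat_dim=2048, model_name="gpt2"):
        super().__init__()
        self.gpt2 = GPT2LMHeadModel.from_pretrained(model_name)
        # Project (e.g., CNN-extracted) video features into GPT-2's hidden space.
        self.video_proj = nn.Linear(video_feat_dim, self.gpt2.config.n_embd)

    def forward(self, video_feats, input_ids, labels=None):
        vid = self.video_proj(video_feats)          # (B, n_frames, H)
        tok = self.gpt2.transformer.wte(input_ids)  # (B, n_tokens, H)
        # One combined sequence: video positions first, then dialogue tokens.
        inputs_embeds = torch.cat([vid, tok], dim=1)
        if labels is not None:
            # -100 is ignored by GPT-2's LM loss, so no loss on video positions.
            pad = labels.new_full(vid.shape[:2], -100)
            labels = torch.cat([pad, labels], dim=1)
        return self.gpt2(inputs_embeds=inputs_embeds, labels=labels)
```

At inference time, a response would be generated autoregressively conditioned on the same combined embedding sequence (video features plus the dialogue history tokens).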
Anthology ID:
2020.acl-main.518
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5842–5848
URL:
https://aclanthology.org/2020.acl-main.518
DOI:
10.18653/v1/2020.acl-main.518
Cite (ACL):
Hung Le and Steven C.H. Hoi. 2020. Video-Grounded Dialogues with Pretrained Generation Language Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5842–5848, Online. Association for Computational Linguistics.
Cite (Informal):
Video-Grounded Dialogues with Pretrained Generation Language Models (Le & Hoi, ACL 2020)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2020.acl-main.518.pdf
Video:
http://slideslive.com/38928971