Grafting Pre-trained Models for Multimodal Headline Generation
Lingfeng Qiao, Chen Wu, Ye Liu, Haoyuan Peng, Di Yin, Bo Ren
Abstract
Multimodal headline generation uses both video frames and transcripts to produce a natural-language title for a video. Annotating grounded headlines for video is labor-intensive and impractical, so large-scale, manually annotated data are scarce. Previous research on pre-trained language models and video-language models has achieved significant progress on related downstream tasks. However, none of these models can be applied directly to multimodal headline generation, which requires both a multimodal encoder and a sentence decoder. A major challenge in simply gluing a language model to a video-language model is modality balance, i.e., combining the complementary abilities of the visual and language modalities. In this paper, we propose a novel approach that grafts the video encoder from a pre-trained video-language model onto a generative pre-trained language model. We also present a consensus fusion mechanism that integrates the different components via inter- and intra-modality relations. Empirically, experiments show that the grafted model achieves strong results on a brand-new dataset collected from real-world applications.
- Anthology ID:
- 2022.emnlp-industry.25
- Volume:
- Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
- Month:
- December
- Year:
- 2022
- Address:
- Abu Dhabi, UAE
- Editors:
- Yunyao Li, Angeliki Lazaridou
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 244–253
- URL:
- https://aclanthology.org/2022.emnlp-industry.25
- DOI:
- 10.18653/v1/2022.emnlp-industry.25
- Cite (ACL):
- Lingfeng Qiao, Chen Wu, Ye Liu, Haoyuan Peng, Di Yin, and Bo Ren. 2022. Grafting Pre-trained Models for Multimodal Headline Generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 244–253, Abu Dhabi, UAE. Association for Computational Linguistics.
- Cite (Informal):
- Grafting Pre-trained Models for Multimodal Headline Generation (Qiao et al., EMNLP 2022)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-1/2022.emnlp-industry.25.pdf
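The abstract's "consensus fusion via inter/intra modality relations" can be illustrated with a toy sketch. The paper does not publish its implementation, so the following is only a minimal, hypothetical illustration of the general idea: each text-token feature attends over video-frame features (inter-modality) and over the other text tokens (intra-modality), and the resulting context vectors are combined with the original feature. All function names and the simple averaging rule are assumptions, not the authors' actual architecture.

```python
import math

def dot(a, b):
    # Inner product of two equal-length feature vectors.
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def consensus_fusion(text_feats, video_feats):
    """Toy consensus fusion (illustrative only, not the paper's model).

    For each text-token feature, compute an inter-modality context over
    the video-frame features and an intra-modality context over the text
    features, then average the token with both contexts.
    """
    dim = len(text_feats[0])
    fused = []
    for t in text_feats:
        # Inter-modality relation: text token attends over video frames.
        inter_w = softmax([dot(t, v) for v in video_feats])
        inter_ctx = [sum(w * v[d] for w, v in zip(inter_w, video_feats))
                     for d in range(dim)]
        # Intra-modality relation: text token attends over text tokens.
        intra_w = softmax([dot(t, u) for u in text_feats])
        intra_ctx = [sum(w * u[d] for w, u in zip(intra_w, text_feats))
                     for d in range(dim)]
        # Simple consensus: average the token with both contexts.
        fused.append([(t[d] + inter_ctx[d] + intra_ctx[d]) / 3.0
                      for d in range(dim)])
    return fused
```

In the paper's actual system the fusion operates between a grafted pre-trained video encoder and a generative language model's decoder; this sketch only shows the inter/intra attention pattern in isolation.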