Abstract
In recent years, the pattern of news consumption has been changing. The most popular multimedia news formats are now multimodal - the reader is often presented not only with a textual article but also with a short, vivid video. To draw the attention of the reader, such video-based articles are usually presented as a short textual summary paired with an image thumbnail. In this paper, we introduce MLASK (MultimodaL Article Summarization Kit) - a new dataset of video-based news articles paired with a textual summary and a cover picture, all obtained by automatically crawling several news websites. We demonstrate how the proposed dataset can be used to model the task of multimodal summarization by training a Transformer-based neural model. We also examine the effects of pre-training: using generative pre-trained language models helps to improve model performance, but (additional) pre-training on the simpler task of text summarization yields even better results. Our experiments suggest that the benefits of pre-training and of using additional modalities in the input are not orthogonal.
- Anthology ID:
- 2023.findings-eacl.67
- Volume:
- Findings of the Association for Computational Linguistics: EACL 2023
- Month:
- May
- Year:
- 2023
- Address:
- Dubrovnik, Croatia
- Editors:
- Andreas Vlachos, Isabelle Augenstein
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 910–924
- URL:
- https://aclanthology.org/2023.findings-eacl.67
- DOI:
- 10.18653/v1/2023.findings-eacl.67
- Cite (ACL):
- Mateusz Krubiński and Pavel Pecina. 2023. MLASK: Multimodal Summarization of Video-based News Articles. In Findings of the Association for Computational Linguistics: EACL 2023, pages 910–924, Dubrovnik, Croatia. Association for Computational Linguistics.
- Cite (Informal):
- MLASK: Multimodal Summarization of Video-based News Articles (Krubiński & Pecina, Findings 2023)
- PDF:
- https://preview.aclanthology.org/add_acl24_videos/2023.findings-eacl.67.pdf