Joint Multimedia Event Extraction from Video and Article

Brian Chen, Xudong Lin, Christopher Thomas, Manling Li, Shoya Yoshida, Lovish Chum, Heng Ji, Shih-Fu Chang


Abstract
Visual and textual modalities contribute complementary information about events described in multimedia documents. Videos contain rich dynamics and detailed unfoldings of events, while text describes more high-level and abstract concepts. However, existing event extraction methods either do not handle video or solely target video while ignoring other modalities. In contrast, we propose the first approach to jointly extract events from both video and text articles. We introduce the new task of Video MultiMedia Event Extraction and propose two novel components to build the first system for this task. First, we propose the first self-supervised cross-modal event coreference model, which determines coreference between video events and text events without any manually annotated pairs. Second, we introduce the first cross-modal transformer architecture, which extracts structured event information from both videos and text documents. We also construct and will publicly release a new benchmark of 860 video-article pairs with extensive annotations for evaluating methods on this task. Our experimental results demonstrate the effectiveness of our proposed method on this benchmark: we achieve 6.0% and 5.8% absolute F-score gains on multimodal event coreference resolution and multimedia event extraction, respectively.
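To make the cross-modal coreference decision concrete, here is a minimal sketch of the core step such a system reduces to at inference time: embedding a video event and a text event into a shared space and thresholding their similarity. This is an illustrative assumption, not the paper's released model; the encoder functions, embedding size, and threshold below are all hypothetical stand-ins.

```python
# Illustrative sketch only. The paper's model is self-supervised and learns
# without annotated video-text pairs; this shows just the coreference
# decision once joint embeddings exist. encode_video_event,
# encode_text_event, and THRESHOLD are hypothetical, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def encode_video_event(clip_id: str) -> np.ndarray:
    """Stand-in video event encoder; returns a unit-norm embedding."""
    v = rng.standard_normal(256)
    return v / np.linalg.norm(v)

def encode_text_event(sentence: str) -> np.ndarray:
    """Stand-in text event encoder; returns a unit-norm embedding."""
    v = rng.standard_normal(256)
    return v / np.linalg.norm(v)

THRESHOLD = 0.5  # hypothetical decision boundary

def coreferent(clip_id: str, sentence: str) -> bool:
    """Call a video event and a text event coreferent when the cosine
    similarity of their embeddings clears the threshold."""
    sim = float(encode_video_event(clip_id) @ encode_text_event(sentence))
    return sim >= THRESHOLD

print(coreferent("clip_0001", "Protesters clashed with police downtown."))
```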
Anthology ID:
2021.findings-emnlp.8
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
74–88
URL:
https://aclanthology.org/2021.findings-emnlp.8
DOI:
10.18653/v1/2021.findings-emnlp.8
Cite (ACL):
Brian Chen, Xudong Lin, Christopher Thomas, Manling Li, Shoya Yoshida, Lovish Chum, Heng Ji, and Shih-Fu Chang. 2021. Joint Multimedia Event Extraction from Video and Article. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 74–88, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Joint Multimedia Event Extraction from Video and Article (Chen et al., Findings 2021)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/2021.findings-emnlp.8.pdf
Software:
 2021.findings-emnlp.8.Software.zip
Video:
 https://preview.aclanthology.org/ingest-2024-clasp/2021.findings-emnlp.8.mp4
Data:
HowTo100M, M2E2, Visual Genome