Text-Derived Knowledge Helps Vision: A Simple Cross-modal Distillation for Video-based Action Anticipation
Sayontan Ghosh, Tanvi Aggarwal, Minh Hoai, Niranjan Balasubramanian
Abstract
Anticipating future actions in a video is useful for many autonomous and assistive technologies. Prior action anticipation work mostly treats this as a vision-modality problem, where the models learn the task primarily from the video features in the action anticipation datasets. However, knowledge about action sequences can also be obtained from external textual data. In this work, we show how knowledge in pretrained language models can be adapted and distilled into vision-based action anticipation models. We show that a simple distillation technique can achieve effective knowledge transfer and provide consistent gains on a strong vision model (Anticipative Vision Transformer) for two action anticipation datasets (3.5% relative gain on EGTEA-GAZE+ and 7.2% relative gain on EPIC-KITCHEN 55), giving a new state-of-the-art result.
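The abstract does not spell out the distillation objective, but a common way to realize this kind of cross-modal transfer is to regularize the vision student's next-action distribution towards the text teacher's softened distribution. The sketch below is a minimal, hypothetical PyTorch formulation under that assumption; the function name, temperature, and mixing weight `alpha` are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def cross_modal_distillation_loss(student_logits: torch.Tensor,
                                  teacher_logits: torch.Tensor,
                                  labels: torch.Tensor,
                                  temperature: float = 2.0,
                                  alpha: float = 0.5) -> torch.Tensor:
    """Hypothetical sketch (not the paper's exact formulation): blend supervised
    cross-entropy on ground-truth next-action labels with a KL term that pulls
    the vision student's predictions towards the text teacher's softened
    action distribution."""
    # Standard supervised loss on the anticipation labels.
    ce = F.cross_entropy(student_logits, labels)
    # Soften both distributions with a temperature and match them with KL divergence.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kd

# Example usage with random tensors (batch of 8, 125 hypothetical action classes).
student = torch.randn(8, 125)
teacher = torch.randn(8, 125)
labels = torch.randint(0, 125, (8,))
loss = cross_modal_distillation_loss(student, teacher, labels)
```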
- Anthology ID: 2023.findings-eacl.141
- Volume: Findings of the Association for Computational Linguistics: EACL 2023
- Month: May
- Year: 2023
- Address: Dubrovnik, Croatia
- Editors: Andreas Vlachos, Isabelle Augenstein
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 1882–1897
- URL: https://aclanthology.org/2023.findings-eacl.141
- DOI: 10.18653/v1/2023.findings-eacl.141
- Cite (ACL): Sayontan Ghosh, Tanvi Aggarwal, Minh Hoai, and Niranjan Balasubramanian. 2023. Text-Derived Knowledge Helps Vision: A Simple Cross-modal Distillation for Video-based Action Anticipation. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1882–1897, Dubrovnik, Croatia. Association for Computational Linguistics.
- Cite (Informal): Text-Derived Knowledge Helps Vision: A Simple Cross-modal Distillation for Video-based Action Anticipation (Ghosh et al., Findings 2023)
- PDF: https://preview.aclanthology.org/emnlp-22-attachments/2023.findings-eacl.141.pdf