Weiji Li
2024
Learning Human Action Representations from Temporal Context in Lifestyle Vlogs
Oana Ignat | Santiago Castro | Weiji Li | Rada Mihalcea
Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing
We address the task of human action representation and show how the approach to generating word representations based on co-occurrence can be adapted to generate human action representations by analyzing their co-occurrence in videos. To this end, we formalize the new task of human action co-occurrence identification in online videos, i.e., determining whether two human actions are likely to co-occur in the same interval of time. We create and make publicly available the Co-Act (Action Co-occurrence) dataset, consisting of a large graph of ~12k co-occurring pairs of visual actions and their corresponding video clips. We describe graph link prediction models that leverage visual and textual information to automatically infer whether two actions co-occur. We show that graphs are particularly well suited to capture relations between human actions, and that the learned graph representations are effective for our task and capture novel and relevant information across different data domains.
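The sketch below is an illustrative, simplified framing of action co-occurrence as graph link prediction, not the Co-Act models from the paper: actions become nodes, observed co-occurrences become edges, and a neighborhood heuristic (Jaccard coefficient, an assumption here) scores unobserved pairs. The example action pairs are hypothetical.

```python
# Minimal sketch (not the authors' implementation) of treating action
# co-occurrence as link prediction on a graph of actions.
import networkx as nx

# Hypothetical co-occurring action pairs observed in the same video clips.
co_occurring_pairs = [
    ("chop vegetables", "wash hands"),
    ("chop vegetables", "heat pan"),
    ("heat pan", "add oil"),
    ("wash hands", "dry hands"),
]

# Build the action co-occurrence graph: nodes are actions, edges mean
# the two actions were seen in the same interval of time.
graph = nx.Graph()
graph.add_edges_from(co_occurring_pairs)

# Score unobserved action pairs with a simple neighborhood-based
# heuristic (Jaccard coefficient): pairs that share many co-occurrence
# partners are predicted as likely to co-occur themselves.
candidates = [("wash hands", "heat pan"), ("add oil", "chop vegetables")]
for u, v, score in nx.jaccard_coefficient(graph, candidates):
    print(f"{u} <-> {v}: predicted co-occurrence score {score:.2f}")
```

The paper's models additionally leverage visual and textual features of the video clips; this toy version uses graph structure alone.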
2021
WhyAct: Identifying Action Reasons in Lifestyle Vlogs
Oana Ignat | Santiago Castro | Hanwen Miao | Weiji Li | Rada Mihalcea
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
We aim to automatically identify human action reasons in online videos. We focus on the widespread genre of lifestyle vlogs, in which people perform actions while verbally describing them. We introduce and make publicly available the WhyAct dataset, consisting of 1,077 visual actions manually annotated with their reasons. We describe a multimodal model that leverages visual and textual information to automatically infer the reasons corresponding to an action presented in the video.
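As a rough, text-only illustration of the task setup (not the WhyAct multimodal model, which also uses visual information), the sketch below ranks candidate reasons for an action by textual similarity to the vlogger's transcript. All strings and the TF-IDF choice are illustrative assumptions.

```python
# Minimal sketch: rank candidate action reasons by similarity between
# the (action + transcript) context and each reason string.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

transcript = "I'm chopping these onions really fine so the sauce stays smooth"
action = "chop onions"
candidate_reasons = [
    "to make the sauce smooth",
    "to practice knife skills",
    "to save time later",
]

# Represent the context and each candidate reason as TF-IDF vectors,
# then rank the reasons by cosine similarity to the context.
vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform([f"{action} {transcript}"] + candidate_reasons)
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

for reason, score in sorted(zip(candidate_reasons, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {reason}")
```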