Kai Cao


2025

X-LeBench: A Benchmark for Extremely Long Egocentric Video Understanding
Wenqi Zhou | Kai Cao | Hao Zheng | Yunze Liu | Xinyi Zheng | Miao Liu | Per Ola Kristensson | Walterio W. Mayol-Cuevas | Fan Zhang | Weizhe Lin | Junxiao Shen
Findings of the Association for Computational Linguistics: EMNLP 2025

Long-form egocentric video understanding provides rich contextual information and unique insights into long-term human behaviors, holding significant potential for applications in embodied intelligence, long-term activity analysis, and personalized assistive technologies. However, existing benchmark datasets primarily focus on single, short (e.g., minutes to tens of minutes) to moderately long videos, leaving a substantial gap in the evaluation of extensive, ultra-long egocentric video recordings. To address this, we introduce X-LeBench, a novel benchmark dataset meticulously designed to fill this gap by focusing on tasks that require a comprehensive understanding of extremely long egocentric video recordings. X-LeBench features a life-logging simulation pipeline that produces realistic, coherent daily plans aligned with real-world video data. This approach enables the flexible integration of synthetic daily plans with real-world footage from Ego4D, a massive-scale egocentric video dataset covering a wide range of daily life scenarios, resulting in 432 simulated video life logs spanning from 23 minutes to 16.4 hours. Evaluations of several baseline systems and multimodal large language models (MLLMs) reveal poor performance across the board, highlighting the inherent challenges of long-form egocentric video understanding, such as temporal localization and reasoning, context aggregation, and memory retention, and underscoring the need for more advanced models.

2018

Sound Signal Processing with Seq2Tree Network
Weicheng Ma | Kai Cao | Zhaoheng Ni | Peter Chin | Xiang Li
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2015

Improving Event Detection with Active Learning
Kai Cao | Xiang Li | Miao Fan | Ralph Grishman
Proceedings of the International Conference Recent Advances in Natural Language Processing

Improving Event Detection with Dependency Regularization
Kai Cao | Xiang Li | Ralph Grishman
Proceedings of the International Conference Recent Advances in Natural Language Processing

Jointly Embedding Relations and Mentions for Knowledge Population
Miao Fan | Kai Cao | Yifan He | Ralph Grishman
Proceedings of the International Conference Recent Advances in Natural Language Processing

Improving Event Detection with Abstract Meaning Representation
Xiang Li | Thien Huu Nguyen | Kai Cao | Ralph Grishman
Proceedings of the First Workshop on Computing News Storylines