Video-Skill-CoT: Skill-based Chain-of-Thoughts for Domain-Adaptive Video Reasoning

Daeun Lee, Jaehong Yoon, Jaemin Cho, Mohit Bansal


Abstract
Recent advances in chain-of-thought (CoT) reasoning have improved complex video understanding, but existing methods often struggle to adapt to domain-specific skills (e.g., temporal grounding, event detection, spatial relations) over various video content. To address this, we propose Video-Skill-CoT (aka Video-SKoT), a framework that automatically constructs and leverages skill-aware CoT supervision for domain-adaptive video reasoning. First, we construct skill-based CoT annotations: we extract domain-relevant reasoning skills from training questions, cluster them into a shared skill taxonomy, and create detailed multi-step CoT rationales tailored to each video-question pair for training. Second, we introduce a skill-specific expert learning framework. Each expert module specializes in a subset of reasoning skills and is trained with lightweight adapters using the collected CoT supervision. We demonstrate the effectiveness of the proposed approach on three video understanding benchmarks, where Video-SKoT consistently outperforms strong baselines. We also provide in-depth analyses comparing different CoT annotation pipelines and the skills learned across multiple video domains.
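The two stages described in the abstract (clustering training questions into a shared skill taxonomy, then routing each skill to a dedicated expert adapter) can be pictured with a minimal sketch. The snippet below is illustrative only, not the authors' implementation: the `embed` function, `NUM_SKILLS`, the toy questions, and the adapter naming are all placeholder assumptions standing in for a pretrained sentence encoder and LoRA-style expert modules.

```python
# Minimal sketch of skill-taxonomy construction and expert routing.
# All names below (embed, NUM_SKILLS, adapter-*) are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

def embed(texts, dim=256):
    # Placeholder embedding via character-trigram hashing; a real pipeline
    # would use a pretrained sentence encoder here.
    rng = np.random.default_rng(0)
    slot = {}
    vecs = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        for j in range(len(t) - 2):
            g = t[j:j + 3]
            if g not in slot:
                slot[g] = int(rng.integers(0, dim))
            vecs[i, slot[g]] += 1.0
    return vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-8)

# Toy training questions spanning three reasoning skills.
questions = [
    "When does the person start cooking?",           # temporal grounding
    "What happens right after the goal is scored?",  # event detection
    "Where is the cup relative to the laptop?",      # spatial relations
]

# Stage 1: cluster question embeddings into a small shared skill taxonomy.
NUM_SKILLS = 3  # assumed taxonomy size for this toy example
labels = KMeans(n_clusters=NUM_SKILLS, n_init=10, random_state=0) \
    .fit_predict(embed(questions))

# Stage 2 (conceptually): one lightweight expert adapter per skill cluster;
# each question is routed to the adapter matching its skill label.
experts = {s: f"adapter-{s}" for s in set(labels)}
for q, s in zip(questions, labels):
    print(f"{experts[s]} handles: {q}")
```

In the paper's full pipeline, the clustered skills additionally condition the generation of multi-step CoT rationales used as training supervision; the sketch above only shows the routing skeleton.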
Anthology ID:
2025.findings-emnlp.1001
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
18435–18449
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.1001/
DOI:
10.18653/v1/2025.findings-emnlp.1001
Cite (ACL):
Daeun Lee, Jaehong Yoon, Jaemin Cho, and Mohit Bansal. 2025. Video-Skill-CoT: Skill-based Chain-of-Thoughts for Domain-Adaptive Video Reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 18435–18449, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Video-Skill-CoT: Skill-based Chain-of-Thoughts for Domain-Adaptive Video Reasoning (Lee et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.1001.pdf
Checklist:
2025.findings-emnlp.1001.checklist.pdf