Sign Language Video Segmentation Using Temporal Boundary Identification

Kavu Maithri Rao, Yasser Hamidullah, Eleftherios Avramidis


Abstract
Sign language segmentation focuses on identifying temporal boundaries within sign language videos. In contrast to previous segmentation techniques that rely on frame-level and phrase-level segmentation, our study emphasizes subtitle-level segmentation, using synchronized subtitle data to facilitate temporal boundary recognition. Building on Beginning-Inside-Outside (BIO) tagging for subtitle unit delineation, we train a sequence-to-sequence (Seq2Seq) model, with and without attention, for subtitle boundary identification. Training on optical flow data and aligned subtitles from BOBSL and YouTube-ASL, we show that the Seq2Seq model with attention outperforms baseline models, achieving improvements in percentage of segments, F1, and IoU scores. An additional contribution is the development of a method for subtitle temporal resolution, aiming to facilitate manual annotation.
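The BIO tagging scheme mentioned in the abstract can be illustrated with a minimal sketch: each video frame is labeled B (beginning of a subtitle unit), I (inside), or O (outside), given the subtitle time spans. The function name, the frame rate, and the subtitle format `(start_sec, end_sec)` below are illustrative assumptions, not the paper's actual code.

```python
def frames_to_bio(num_frames, subtitles, fps=25):
    """Label each video frame B/I/O relative to subtitle spans.

    subtitles: list of (start_sec, end_sec) tuples, assumed
    non-overlapping and sorted by start time (an assumption
    for this sketch).
    """
    tags = ["O"] * num_frames  # frames outside any subtitle
    for start_sec, end_sec in subtitles:
        start = int(round(start_sec * fps))
        end = min(int(round(end_sec * fps)), num_frames - 1)
        if start >= num_frames:
            continue
        tags[start] = "B"               # first frame of the unit
        for i in range(start + 1, end + 1):
            tags[i] = "I"               # remaining frames inside
    return tags
```

For example, with two subtitles spanning 0.0–0.2 s and 0.5–0.8 s at 10 fps, frames 0–2 are tagged B, I, I and frames 5–8 are tagged B, I, I, I, with O elsewhere. A Seq2Seq model would then be trained to predict such a tag sequence from per-frame features (e.g. optical flow).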
Anthology ID:
2025.acl-srw.93
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Jin Zhao, Mingyang Wang, Zhu Liu
Venues:
ACL | WS
Publisher:
Association for Computational Linguistics
Pages:
1213–1224
URL:
https://preview.aclanthology.org/landing_page/2025.acl-srw.93/
Cite (ACL):
Kavu Maithri Rao, Yasser Hamidullah, and Eleftherios Avramidis. 2025. Sign Language Video Segmentation Using Temporal Boundary Identification. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), pages 1213–1224, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Sign Language Video Segmentation Using Temporal Boundary Identification (Rao et al., ACL 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.acl-srw.93.pdf