Abstract
Most existing work on sign language recognition encodes signed videos without explicitly modeling the phonological attributes of signs. Given that handshape is a vital parameter in sign languages, we explore the potential of handshape-aware sign language recognition. We augment the PHOENIX14T dataset with gloss-level handshape labels, resulting in the new PHOENIX14T-HS dataset. We propose two methods for handshape-inclusive sign language recognition: a single-encoder network and a dual-encoder network, complemented by a training strategy that jointly optimizes the CTC loss and a frame-level cross-entropy loss. The proposed methodology consistently outperforms the baseline. The dataset and code can be accessed at: www.anonymous.com.

- Anthology ID: 2023.findings-emnlp.198
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 2993–3002
- URL: https://aclanthology.org/2023.findings-emnlp.198
- DOI: 10.18653/v1/2023.findings-emnlp.198
- Cite (ACL): Xuan Zhang and Kevin Duh. 2023. Handshape-Aware Sign Language Recognition: Extended Datasets and Exploration of Handshape-Inclusive Methods. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2993–3002, Singapore. Association for Computational Linguistics.
- Cite (Informal): Handshape-Aware Sign Language Recognition: Extended Datasets and Exploration of Handshape-Inclusive Methods (Zhang & Duh, Findings 2023)
- PDF: https://preview.aclanthology.org/nschneid-patch-2/2023.findings-emnlp.198.pdf
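The training strategy described in the abstract combines a sentence-level CTC loss over gloss sequences with a frame-level cross-entropy loss over handshape labels. A minimal PyTorch sketch of such a joint objective is given below; it is not the authors' code, and the tensor shapes, vocabulary sizes, and the weighting hyperparameter `lam` are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the paper's implementation): a joint objective
# that sums CTC loss over gloss sequences with frame-level cross-entropy
# over per-frame handshape labels, weighted by an assumed scalar `lam`.
torch.manual_seed(0)

T, B = 50, 2            # frames per video, batch size (illustrative)
n_gloss, n_hs = 100, 61  # gloss vocabulary and handshape classes (assumed)

# Encoder outputs would normally produce these; here they are random.
gloss_logits = torch.randn(T, B, n_gloss).log_softmax(-1)  # (T, B, C) for CTCLoss
hs_logits = torch.randn(B, n_hs, T)                         # (B, C, T) for CrossEntropyLoss

targets = torch.randint(1, n_gloss, (B, 10))   # gloss label sequences (0 = CTC blank)
input_lengths = torch.full((B,), T)            # all frames are valid
target_lengths = torch.full((B,), 10)          # gloss sequence lengths
hs_labels = torch.randint(0, n_hs, (B, T))     # one handshape label per frame

ctc_loss = nn.CTCLoss(blank=0)(gloss_logits, targets, input_lengths, target_lengths)
ce_loss = nn.CrossEntropyLoss()(hs_logits, hs_labels)

lam = 0.5                       # assumed loss-weighting hyperparameter
loss = ctc_loss + lam * ce_loss  # joint objective to backpropagate
```

Both terms are standard PyTorch losses; in a real training loop `loss.backward()` would propagate gradients through a shared (single-encoder) or separate (dual-encoder) video encoder.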