@inproceedings{zhang-duh-2023-handshape,
    title = "Handshape-Aware Sign Language Recognition: Extended Datasets and Exploration of Handshape-Inclusive Methods",
    author = "Zhang, Xuan  and
      Duh, Kevin",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.findings-emnlp.198/",
    doi = "10.18653/v1/2023.findings-emnlp.198",
    pages = "2993--3002",
    abstract = "The majority of existing work on sign language recognition encodes signed videos without explicitly acknowledging the phonological attributes of signs. Given that handshape is a vital parameter in sign languages, we explore the potential of handshape-aware sign language recognition. We augment the PHOENIX14T dataset with gloss-level handshape labels, resulting in the new PHOENIX14T-HS dataset. Two unique methods are proposed for handshape-inclusive sign language recognition: a single-encoder network and a dual-encoder network, complemented by a training strategy that simultaneously optimizes both the CTC loss and frame-level cross-entropy loss. The proposed methodology consistently outperforms the baseline performance. The dataset and code can be accessed at: www.anonymous.com."
}