The American Sign Language Knowledge Graph: Infusing ASL Models with Linguistic Knowledge
Lee Kezar | Nidhi Munikote | Zian Zeng | Zed Sehyr | Naomi Caselli | Jesse Thomason
Findings of the Association for Computational Linguistics: NAACL 2025
Sign language models could make modern language technologies more accessible to those who sign, but the supply of accurately labeled data struggles to meet the demand of training large, end-to-end neural models. As an alternative to this approach, we explore how knowledge about the linguistic structure of signs can be used as inductive priors for learning sign recognition and comprehension tasks. We first construct the American Sign Language Knowledge Graph (ASLKG) from 11 sources of linguistic knowledge, with emphasis on features related to signs’ phonological and lexical-semantic properties. Then, we use the ASLKG to train neuro-symbolic models on tasks that take ASL video as input, achieving accuracies of 91% for isolated sign recognition, 14% for predicting the semantic features of unseen signs, and 36% for classifying the topic of YouTube-ASL videos.
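
To make the neuro-symbolic setup concrete, the sketch below illustrates one way knowledge-graph features could act as inductive priors: neural classifiers score each phonological feature from video, and a symbolic lookup over the graph combines those scores into a gloss prediction. This is a minimal illustration under assumed structure, not the authors' implementation; the glosses, feature inventory, and probabilities are all made up for the example.

```python
# Illustrative sketch (not the paper's code): phonological features from a
# toy knowledge graph serve as symbolic priors for isolated sign recognition.

# Toy "knowledge graph": each gloss maps to its phonological attributes.
# Entries and feature values are hypothetical.
ASLKG = {
    "MOTHER": {"handshape": "5", "location": "chin", "movement": "tap"},
    "FATHER": {"handshape": "5", "location": "forehead", "movement": "tap"},
    "NICE":   {"handshape": "B", "location": "palm", "movement": "slide"},
}

def recognize(feature_probs):
    """Score each gloss by the joint probability of its KG features.

    feature_probs: dict mapping feature name -> {value: probability},
    e.g. softmaxed outputs of per-feature neural classifiers on a clip.
    """
    scores = {}
    for gloss, feats in ASLKG.items():
        p = 1.0
        for name, value in feats.items():
            # Unseen feature values get a small floor probability.
            p *= feature_probs.get(name, {}).get(value, 1e-6)
        scores[gloss] = p
    return max(scores, key=scores.get), scores

# Stand-in for neural per-feature predictions on one video clip.
preds = {
    "handshape": {"5": 0.8, "B": 0.2},
    "location":  {"chin": 0.6, "forehead": 0.3, "palm": 0.1},
    "movement":  {"tap": 0.9, "slide": 0.1},
}

best, scores = recognize(preds)
print(best, scores)  # MOTHER scores highest: 0.8 * 0.6 * 0.9
```

One appeal of this factorization, and plausibly part of why the paper can predict features of unseen signs, is that the neural component only has to learn a small, reusable inventory of phonological features, while the graph handles the mapping from features to glosses, including glosses never observed during training.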