Semi-Supervised Spoken Language Glossification
Huijie Yao | Wengang Zhou | Hao Zhou | Houqiang Li
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

Spoken language glossification (SLG) aims to translate spoken language text into sign language gloss, i.e., a written record of sign language. In this work, we present a framework named Semi-Supervised Spoken Language Glossification (S3LG) for SLG. To tackle the bottleneck of limited parallel data in SLG, S3LG incorporates large-scale monolingual spoken language text into SLG training. The proposed framework follows a self-training structure that iteratively annotates and learns from pseudo labels. Considering the lexical similarity and syntactic difference between sign language and spoken language, S3LG adopts both a rule-based heuristic and a model-based approach for auto-annotation. During training, we randomly mix these complementary synthetic datasets and distinguish them with a special token. As the synthetic data may be of lower quality, S3LG further leverages consistency regularization to reduce the negative impact of noise in the synthetic data. Extensive experiments on public benchmarks demonstrate the effectiveness of S3LG. Our code is available at https://github.com/yaohj11/S3LG.
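The sketch below illustrates the training scheme the abstract describes: two auto-annotators produce pseudo glosses for monolingual text, each example is tagged with a token marking its origin, the two synthetic sets are randomly mixed, and the model is retrained in a self-training loop. It is a minimal toy sketch, not the authors' implementation; every identifier (`ToyGlosser`, `rule_based_annotate`, the tag strings, etc.) is a hypothetical stand-in, and the consistency-regularization term is indicated only as a comment.

```python
# Minimal, self-contained sketch of an S3LG-style semi-supervised loop.
# All names are hypothetical placeholders, not the authors' actual API.
import random

RULE_TAG, MODEL_TAG = "<rule>", "<model>"  # mark the origin of pseudo labels


def rule_based_annotate(text: str) -> list[str]:
    """Heuristic annotator exploiting lexical similarity between spoken
    language and gloss (placeholder: uppercase the tokens)."""
    return text.upper().split()


class ToyGlosser:
    """Stand-in for a trainable text-to-gloss model."""

    def annotate(self, text: str) -> list[str]:
        return text.upper().split()  # placeholder prediction

    def train_step(self, src: str, tgt: list[str]) -> None:
        pass  # placeholder parameter update


def build_mixed_synthetic(monolingual, model):
    """Auto-annotate monolingual text with both annotators, tag each
    example with a token identifying its origin, and mix randomly."""
    data = []
    for text in monolingual:
        data.append((f"{RULE_TAG} {text}", rule_based_annotate(text)))
        data.append((f"{MODEL_TAG} {text}", model.annotate(text)))
    random.shuffle(data)  # randomly mix the two complementary sets
    return data


def self_training(parallel, monolingual, rounds=3):
    """Iteratively re-annotate the monolingual text and retrain.
    A consistency-regularization term between the two pseudo-label
    views would be added to the loss here to damp annotation noise."""
    model = ToyGlosser()
    for _ in range(rounds):
        synthetic = build_mixed_synthetic(monolingual, model)
        for src, tgt in synthetic + list(parallel):
            model.train_step(src, tgt)
    return model
```

Under these assumptions, the origin tags let a single model learn from both pseudo-label sources while remaining able to condition on, and discount, their systematic differences.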