Conor Lynch


2025

Rethinking Sign Language Translation: The Impact of Signer Dependence on Model Evaluation
Keren Artiaga | Sabyasachi Kamila | Haithem Afli | Conor Lynch | Mohammed Hasanuzzaman
Findings of the Association for Computational Linguistics: EMNLP 2025

Sign Language Translation (SLT) has advanced with deep learning, yet evaluations remain largely signer-dependent, with overlapping signers across train/dev/test splits. This raises concerns about whether models truly generalise or instead rely on signer-specific regularities. We conduct signer-fold cross-validation on GFSLT-VLP, GASLT, and SignCL—three leading, publicly available, gloss-free SLT models—on CSL-Daily and PHOENIX14T. Under signer-independent evaluation, performance drops sharply: on PHOENIX14T, GFSLT-VLP falls from BLEU-4 21.44 to 3.59 and ROUGE-L 42.49 to 11.89; GASLT from 15.74 to 8.26; and SignCL from 22.74 to 3.66. We also observe that in CSL-Daily many target sentences are performed by multiple signers, so common splits can place identical sentences in both training and test, inflating absolute scores by rewarding recall of recurring sentences rather than genuine generalisation. These findings indicate that signer-dependent evaluation can substantially overestimate SLT capability. We recommend: (1) adopting signer-independent protocols to ensure generalisation to unseen signers; (2) restructuring datasets to include explicit signer-independent, sentence-disjoint splits for consistent benchmarking; and (3) reporting both signer-dependent and signer-independent results together with train–test sentence overlap to improve transparency and comparability.
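The signer-fold protocol and overlap reporting recommended in the abstract can be sketched in a few lines. The helper names below are illustrative (not from the paper): samples are assumed to be `(signer_id, target_sentence)` pairs, folds are formed by holding out one signer at a time, and overlap is measured as the fraction of test sentences whose exact text also appears in training.

```python
from collections import defaultdict

def signer_folds(samples):
    """Group sample indices by signer ID for leave-one-signer-out cross-validation.

    samples: list of (signer_id, sentence) pairs.
    Returns {signer_id: [sample indices]}; each signer's bucket serves in turn
    as the held-out test set while the remaining signers form the training set.
    """
    folds = defaultdict(list)
    for i, (signer, _sentence) in enumerate(samples):
        folds[signer].append(i)
    return dict(folds)

def sentence_overlap(train, test):
    """Fraction of test sentences whose exact target text also occurs in training.

    A non-zero rate flags the inflation effect described above: the model can
    score by recalling recurring sentences rather than generalising.
    """
    train_sents = {sentence for _signer, sentence in train}
    if not test:
        return 0.0
    hits = sum(1 for _signer, sentence in test if sentence in train_sents)
    return hits / len(test)
```

For example, if signer A and signer B both perform the sentence "hello", a split that trains on A and tests on B is signer-independent but not sentence-disjoint, and `sentence_overlap` would report 1.0 for that test sentence, which is exactly the overlap statistic the abstract recommends publishing alongside results.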