Approaching Sign Language Gloss Translation as a Low-Resource Machine Translation Task

Xuan Zhang, Kevin Duh


Abstract
A cascaded Sign Language Translation system first maps sign videos to gloss annotations and then translates glosses into a spoken language. This work focuses on the second-stage gloss translation component, which is challenging due to the scarcity of publicly available parallel data. We approach gloss translation as a low-resource machine translation task and investigate two popular methods for improving translation quality: hyperparameter search and backtranslation. We discuss the potential and pitfalls of these methods based on experiments on the RWTH-PHOENIX-Weather 2014T dataset.
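For readers unfamiliar with the backtranslation method mentioned in the abstract, the following minimal Python sketch shows the general idea of generating synthetic gloss–text pairs from target-side monolingual sentences. The reverse_model object, its translate method, and the function name are illustrative placeholders and do not reflect the paper's actual implementation.

    # A minimal back-translation sketch (illustrative only): build synthetic
    # (pseudo-gloss, text) training pairs from target-side monolingual text
    # using a hypothetical reverse text-to-gloss model.

    def backtranslate(monolingual_sentences, reverse_model):
        """Return synthetic (pseudo-gloss, text) pairs for data augmentation."""
        synthetic_pairs = []
        for sentence in monolingual_sentences:
            # The reverse model maps a spoken-language sentence back to a gloss sequence.
            pseudo_gloss = reverse_model.translate(sentence)
            synthetic_pairs.append((pseudo_gloss, sentence))
        return synthetic_pairs

    # The synthetic pairs would then be concatenated with the original parallel
    # data before training the forward gloss-to-text model.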
Anthology ID:
2021.mtsummit-at4ssl.7
Volume:
Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL)
Month:
August
Year:
2021
Address:
Virtual
Venue:
MTSummit
Publisher:
Association for Machine Translation in the Americas
Pages:
60–70
URL:
https://aclanthology.org/2021.mtsummit-at4ssl.7
Cite (ACL):
Xuan Zhang and Kevin Duh. 2021. Approaching Sign Language Gloss Translation as a Low-Resource Machine Translation Task. In Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL), pages 60–70, Virtual. Association for Machine Translation in the Americas.
Cite (Informal):
Approaching Sign Language Gloss Translation as a Low-Resource Machine Translation Task (Zhang & Duh, MTSummit 2021)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2021.mtsummit-at4ssl.7.pdf
Data
PHOENIX14T