Abstract
Sign language translation from video to spoken text presents unique challenges owing to the distinct grammar, nuances of expression, and high variation in visual appearance across different speakers and contexts. Gloss annotations serve as an intermediary to guide the translation process. In our work, we focus on the Gloss2Text translation stage and propose several advances: leveraging pre-trained large language models (LLMs), data augmentation, and a novel label-smoothing loss function that exploits gloss translation ambiguities, significantly improving over state-of-the-art approaches. Through extensive experiments and ablation studies on the PHOENIX Weather 2014T dataset, our approach surpasses state-of-the-art performance in Gloss2Text translation, indicating its efficacy for sign language translation and suggesting promising avenues for future research and development.
- Anthology ID:
- 2024.findings-emnlp.947
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2024
- Month:
- November
- Year:
- 2024
- Address:
- Miami, Florida, USA
- Editors:
- Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 16162–16171
- URL:
- https://aclanthology.org/2024.findings-emnlp.947
- DOI:
- 10.18653/v1/2024.findings-emnlp.947
- Cite (ACL):
- Pooya Fayyazsanavi, Antonios Anastasopoulos, and Jana Kosecka. 2024. Gloss2Text: Sign Language Gloss translation using LLMs and Semantically Aware Label Smoothing. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 16162–16171, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal):
- Gloss2Text: Sign Language Gloss translation using LLMs and Semantically Aware Label Smoothing (Fayyazsanavi et al., Findings 2024)
- PDF:
- https://preview.aclanthology.org/dois-2013-emnlp/2024.findings-emnlp.947.pdf