2022
pdf
abs
Findings of the First WMT Shared Task on Sign Language Translation (WMT-SLT22)
Mathias Müller | Sarah Ebling | Eleftherios Avramidis | Alessia Battisti | Michèle Berger | Richard Bowden | Annelies Braffort | Necati Cihan Camgöz | Cristina España-Bonet | Roman Grundkiewicz | Zifan Jiang | Oscar Koller | Amit Moryossef | Regula Perrollaz | Sabine Reinhard | Annette Rios | Dimitar Shterionov | Sandra Sidler-Miserez | Katja Tissi
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper presents the results of the First WMT Shared Task on Sign Language Translation (WMT-SLT22). This shared task is concerned with automatic translation between signed and spoken languages. The task is novel in the sense that it requires processing visual information (such as video frames or human pose estimation) beyond the well-known paradigm of text-to-text machine translation (MT). The task featured two tracks, translating from Swiss German Sign Language (DSGS) to German and vice versa. Seven teams participated in this first edition of the task, all submitting to the DSGS-to-German track. Besides a system ranking and system papers describing state-of-the-art techniques, this shared task makes the following scientific contributions: novel corpora, reproducible baseline systems, and new protocols and software for human evaluation. Finally, the task also resulted in the first publicly available set of system outputs and human evaluation scores for sign language translation.
pdf
abs
Clean Text and Full-Body Transformer: Microsoft’s Submission to the WMT22 Shared Task on Sign Language Translation
Subhadeep Dey | Abhilash Pal | Cyrine Chaabani | Oscar Koller
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper describes Microsoft’s submission to the first shared task on sign language translation at WMT 2022, a public competition tackling sign language to spoken language translation for Swiss German sign language. The task is very challenging due to data scarcity and an unprecedented vocabulary size of more than 20k words on the target side. Moreover, the data is taken from real broadcast news, includes native signing and covers scenarios of long videos. Motivated by recent advances in action recognition, we incorporate full body information by extracting features from a pre-trained I3D model and applying a standard transformer network. The accuracy of the system is further improved by applying careful data cleaning on the target text. We obtain BLEU scores of 0.6 and 0.78 on the test and dev set respectively, which is the best score among the participants of the shared task. The submission also ranks first in the human evaluation. The BLEU score is further improved to 1.08 on the dev set by applying features extracted from a lip reading model.
2014
pdf
abs
Extensions of the Sign Language Recognition and Translation Corpus RWTH-PHOENIX-Weather
Jens Forster | Christoph Schmidt | Oscar Koller | Martin Bellgardt | Hermann Ney
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
This paper introduces the RWTH-PHOENIX-Weather 2014, a video-based, large vocabulary, German sign language corpus which has been extended over the last two years, tripling the size of the original corpus. The corpus contains weather forecasts simultaneously interpreted into sign language which were recorded from German public TV and manually annotated using glosses on the sentence level, along with semi-automatically transcribed spoken German extracted from the videos using the open-source speech recognition system RASR. Spatial annotations of the signers’ hands as well as shape and orientation annotations of the dominant hand have been added for more than 40k and 10k video frames, respectively, creating one of the largest corpora allowing for quantitative evaluation of object tracking algorithms. Further, over 2k signs have been annotated using the SignWriting annotation system, focusing on the shape, orientation, and movement as well as spatial contacts of both hands. Finally, extended recognition and translation setups are defined, and baseline results are presented.
2013
pdf
Improving Continuous Sign Language Recognition: Speech Recognition Techniques and System Design
Jens Forster | Oscar Koller | Christian Oberdörfer | Yannick Gweth | Hermann Ney
Proceedings of the Fourth Workshop on Speech and Language Processing for Assistive Technologies
pdf
bib
abs
Using viseme recognition to improve a sign language translation system
Christoph Schmidt | Oscar Koller | Hermann Ney | Thomas Hoyoux | Justus Piater
Proceedings of the 10th International Workshop on Spoken Language Translation: Papers
Sign language-to-text translation systems are similar to spoken language translation systems in that they consist of a recognition phase and a translation phase. First, the video of a person signing is transformed into a transcription of the signs, which is then translated into the text of a spoken language. One distinctive feature of sign languages is their multi-modal nature, as they can express meaning simultaneously via hand movements, body posture and facial expressions. In some sign languages, certain signs are accompanied by mouthings, i.e. the person silently pronounces the word while signing. In this work, we closely integrate a recognition and translation framework by adding a viseme recognizer (“lip reading system”) based on an active appearance model and by optimizing the recognition system to improve the translation output. The system outperforms the standard approach of separate recognition and translation.
2012
pdf
abs
RWTH-PHOENIX-Weather: A Large Vocabulary Sign Language Recognition and Translation Corpus
Jens Forster | Christoph Schmidt | Thomas Hoyoux | Oscar Koller | Uwe Zelle | Justus Piater | Hermann Ney
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
This paper introduces the RWTH-PHOENIX-Weather corpus, a video-based, large vocabulary corpus of German Sign Language suitable for statistical sign language recognition and translation. In contrast to most available sign language data collections, the RWTH-PHOENIX-Weather corpus has not been recorded for linguistic research but for use in statistical pattern recognition. The corpus contains weather forecasts recorded from German public TV which are manually annotated using glosses distinguishing sign variants, and time boundaries have been marked on the sentence and the gloss level. Further, the spoken German weather forecast has been transcribed in a semi-automatic fashion using a state-of-the-art automatic speech recognition system. Moreover, an additional translation of the glosses into spoken German has been created to capture allowable translation variability. In addition to the corpus, experimental baseline results for hand and head tracking, statistical sign language recognition and translation are presented.