2010
The SignSpeak Project - Bridging the Gap Between Signers and Speakers
Philippe Dreuw | Hermann Ney | Gregorio Martinez | Onno Crasborn | Justus Piater | Jose Miguel Moya | Mark Wheatley
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
The SignSpeak project will be a first step toward bringing sign language recognition and translation to the scientific level already reached in related research fields such as automatic speech recognition and statistical machine translation of spoken languages. Deaf communities revolve around sign languages, as these are their natural means of communication. Although deaf, hard-of-hearing, and hearing signers can communicate among themselves without problems, the deaf community faces serious challenges in integrating into educational, social, and work environments. The overall goal of SignSpeak is to develop a new vision-based technology for recognizing and translating continuous sign language into text. New knowledge about the nature of sign language structure, from the perspective of machine recognition of continuous sign language, will enable a subsequent breakthrough in the development of a new vision-based technology for continuous sign language recognition and translation. Existing and new publicly available corpora will be used to evaluate the research progress throughout the whole project.
2008
Benchmark Databases for Video-Based Automatic Sign Language Recognition
Philippe Dreuw | Carol Neidle | Vassilis Athitsos | Stan Sclaroff | Hermann Ney
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
A new, linguistically annotated video database for automatic sign language recognition is presented. The new RWTH-BOSTON-400 corpus, which consists of 843 sentences, several speakers, and separate subsets for training, development, and testing, is described in detail. Large corpora are needed for the evaluation and benchmarking of automatic sign language recognition. Recent research has focused mainly on isolated sign language recognition using video sequences recorded under lab conditions with special hardware such as data gloves. Such databases have generally consisted of only one speaker, and have thus been speaker-dependent and limited to small vocabularies. A new database access interface, designed and created to provide fast access to the database statistics and content, makes it possible to easily browse and retrieve particular subsets of the video database. Preliminary baseline results on the new corpora are presented. In contradistinction to other research in this area, all databases presented in this paper will be publicly available.
The ATIS Sign Language Corpus
Jan Bungeroth | Daniel Stein | Philippe Dreuw | Hermann Ney | Sara Morrissey | Andy Way | Lynette van Zijl
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
Systems that automatically process sign language rely on appropriate data. We therefore present the ATIS sign language corpus, which is based on the domain of air travel information. It is available in five languages: English, German, Irish Sign Language, German Sign Language, and South African Sign Language. The corpus can be used for different tasks, such as automatic statistical translation and automatic sign language recognition, and it allows the specific modeling of spatial references in signing space.
2007
Hand in hand: automatic sign language to English translation
Daniel Stein | Philippe Dreuw | Hermann Ney | Sara Morrissey | Andy Way
Proceedings of the 11th Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages: Papers
2006
A German Sign Language Corpus of the Domain Weather Report
Jan Bungeroth | Daniel Stein | Philippe Dreuw | Morteza Zahedi | Hermann Ney
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)
All systems for automatic sign language translation and recognition, in particular statistical systems, rely on adequately sized corpora. For this purpose, we created the Phoenix corpus that is based on German television weather reports translated into German Sign Language. It comes with a rich annotation of the video data, a bilingual text-based sentence corpus and a monolingual German corpus.