2022
Proceedings of the 7th International Workshop on Sign Language Translation and Avatar Technology: The Junction of the Visual and the Textual: Challenges and Perspectives
Eleni Efthimiou | Stavroula-Evita Fotinea | Thomas Hanke | John C. McDonald | Dimitar Shterionov | Rosalee Wolfe
Signing Avatar Performance Evaluation within EASIER Project
Athanasia-Lida Dimou | Vassilis Papavassiliou | John McDonald | Theodore Goulas | Kyriaki Vasilaki | Anna Vacalopoulou | Stavroula-Evita Fotinea | Eleni Efthimiou | Rosalee Wolfe
Proceedings of the 7th International Workshop on Sign Language Translation and Avatar Technology: The Junction of the Visual and the Textual: Challenges and Perspectives
The direct involvement of deaf users in the development and evaluation of signing avatars is imperative to achieve legibility and to build trust among consumers of synthetic signing technology. One example of constructive cooperation between researchers and the deaf community is the EASIER project, where user-driven design and technology development have already started producing results. A major goal of the project is the direct involvement of sign language (SL) users at every stage of development of the project’s signing avatar. Because the developers wished to consider every parameter of SL articulation, including affect and prosody, in developing the EASIER SL representation engine, it was necessary to establish a steady communication channel with a broad community of SL users who can act as evaluators and provide guidance throughout the research, both during the project’s end-user evaluation cycles and beyond. To this end, we have developed a questionnaire-based methodology that enables researchers to reach signers of different SL communities online and to collect their guidance and preferences on all aspects of SL avatar animation under study. In this paper, we report on the methodology behind the application of the EASIER evaluation framework for end-user guidance in signing avatar development, as it is planned to address signers of four SLs during the first project evaluation cycle: Greek Sign Language (GSL), French Sign Language (LSF), German Sign Language (DGS), and Swiss German Sign Language (DSGS). We also briefly report on findings from the pilot implementation of the questionnaire with GSL content.
A Novel Approach to Managing Lower Face Complexity in Signing Avatars
John McDonald | Ronan Johnson | Rosalee Wolfe
Proceedings of the 7th International Workshop on Sign Language Translation and Avatar Technology: The Junction of the Visual and the Textual: Challenges and Perspectives
An avatar that produces legible, easy-to-understand signing is an essential component of an effective automatic signed/spoken translation system. Facial nonmanual signals are essential to natural signing, but unfortunately signing avatars still do not produce acceptable facial expressions, particularly on the lower face. This paper reports on an innovative method to create more realistic lip postures. The approach manages the complexity of creating lip postures, placing fewer demands on the artists who create them. The method will be integral to our efforts to develop libraries of lip postures that support the generation of facial expressions for several sign languages.
Supporting Mouthing in Signed Languages: New innovations and a proposal for future corpus building
Rosalee Wolfe | John McDonald | Ronan Johnson | Ben Sturr | Syd Klinghoffer | Anthony Bonzani | Andrew Alexander | Nicole Barnekow
Proceedings of the 7th International Workshop on Sign Language Translation and Avatar Technology: The Junction of the Visual and the Textual: Challenges and Perspectives
A recurring concern regarding the quality of signing avatars is the lack of proper facial movements, particularly in actions that involve mouthing. An analysis uncovered three challenges contributing to the problem. The first is the difficulty of devising an algorithmic strategy for generating mouthing, given the rich variety of mouthings in sign language: part or all of a spoken word may be mouthed depending on the sign language, the syllabic structure of the mouthed word, and the register of address and discourse setting. The second challenge is technological: previous efforts to create avatar mouthing have failed to model the timing present in mouthing or to model the mouth’s appearance properly. The third challenge is one of usability: previous editing systems, where they existed, were time-consuming to use. This paper describes efforts to improve avatar mouthing by addressing these challenges, resulting in a new approach to mouthing animation. The paper concludes by proposing an experiment in corpus building using the new approach.
2021
The Myth of Signing Avatars
John C. McDonald | Rosalee Wolfe | Eleni Efthimiou | Evita Fotinea | Frankie Picron | Davy Van Landuyt | Tina Sioen | Annelies Braffort | Michael Filhol | Sarah Ebling | Thomas Hanke | Verena Krausneker
Proceedings of the 1st International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL)
Development of automatic translation between signed and spoken languages has lagged behind the development of automatic translation between spoken languages, but it is a common misperception that extending machine translation techniques to include signed languages should be a straightforward process. A contributing factor is the lack of an acceptable method for displaying sign language apart from interpreters on video. This position paper examines the challenges of displaying a signed language as a target in automatic translation, analyses the underlying causes and suggests strategies to develop display technologies that are acceptable to sign language communities.
2020
A survey of Shading Techniques for Facial Deformations on Sign Language Avatars
Ronan Johnson | Rosalee Wolfe
Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives
Of the five phonemic parameters in sign language (handshape, location, palm orientation, movement, and nonmanual expressions), the one that still poses the most challenges for effective avatar display is nonmanual signals. Facial nonmanual signals carry a rich combination of linguistic and pragmatic information, but current techniques have yet to portray these in a satisfactory manner. Due to the complexity of facial movements, additional considerations must be taken into account for rendering in real time. Of particular interest is shading the areas of facial deformation to improve legibility. In contrast to more physically based, compute-intensive techniques that more closely mimic nature, we propose using a simple, classic Phong illumination model with a dynamically modified layered texture. To localize and control the desired shading, we utilize an opacity channel within the texture. The new approach, when applied to our avatar “Paula”, results in much quicker render times than more sophisticated, computationally intensive techniques.
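The abstract above does not give implementation details, so the following is only a minimal sketch of the general technique it names: classic Phong illumination whose albedo comes from a base skin texture blended with a wrinkle layer through an opacity channel. The function and parameter names (e.g. phong_with_layered_texture, wrinkle_opacity) are illustrative assumptions, not code from the paper or from the avatar “Paula”.

```python
import numpy as np

def phong_with_layered_texture(normal, light_dir, view_dir,
                               base_color, wrinkle_color, wrinkle_opacity,
                               ambient=0.1, diffuse=0.7, specular=0.2, shininess=16.0):
    """Classic Phong shading with a dynamically weighted wrinkle layer.

    wrinkle_opacity (0..1) stands in for the opacity channel that localizes
    the darkened wrinkle/fold texture; it would be updated per frame as the
    face deforms. All vectors are unit-length numpy arrays; colors are RGB in [0, 1].
    """
    # Blend the base skin texture with the wrinkle layer using the opacity channel.
    albedo = (1.0 - wrinkle_opacity) * base_color + wrinkle_opacity * wrinkle_color

    # Lambertian diffuse term.
    n_dot_l = max(float(np.dot(normal, light_dir)), 0.0)

    # Phong specular term: reflect the light direction about the normal.
    reflect = 2.0 * np.dot(normal, light_dir) * normal - light_dir
    r_dot_v = max(float(np.dot(reflect, view_dir)), 0.0)

    color = albedo * (ambient + diffuse * n_dot_l) + specular * (r_dot_v ** shininess)
    return np.clip(color, 0.0, 1.0)

# Toy example: a forward-facing surface point with a strongly weighted fold layer.
normal = np.array([0.0, 0.0, 1.0])
light = np.array([0.0, 0.7071, 0.7071])
view = np.array([0.0, 0.0, 1.0])
skin = np.array([0.90, 0.75, 0.65])
fold = np.array([0.60, 0.45, 0.40])
print(phong_with_layered_texture(normal, light, view, skin, fold, wrinkle_opacity=0.8))
```

Because only the opacity-driven texture blend changes from frame to frame while the lighting model itself stays simple, this style of approach keeps per-frame cost low, which is consistent with the real-time motivation stated in the abstract.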
2015
Synthesizing the finger alphabet of Swiss German Sign Language and evaluating the comprehensibility of the resulting animations
Sarah Ebling | Rosalee Wolfe | Jerry Schnepp | Souad Baowidan | John McDonald | Robyn Moncrief | Sandra Sidler-Miserez | Katja Tissi
Proceedings of SLPAT 2015: 6th Workshop on Speech and Language Processing for Assistive Technologies
2014
Expanding n-gram analytics in ELAN and a case study for sign synthesis
Rosalee Wolfe | John McDonald | Larwan Berke | Marie Stumbo
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
Corpus analysis is a powerful tool for signed language synthesis. A new extension to ELAN offers expanded n-gram analysis tools including improved search capabilities and an extensive library of statistical measures of association for n-grams. Uncovering and exploring coarticulatory timing effects via corpus analysis requires n-gram analysis to discover the most frequently occurring bigrams. This paper presents an overview of the new tools and a case study in American Sign Language synthesis that exploits these capabilities for computing more natural timing in generated sentences. The new extension provides a time-saving convenience for language researchers using ELAN.
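The ELAN extension described above is not reproduced here; as a rough illustration of the kind of analysis involved, the sketch below counts bigrams over a sequence of sign glosses and scores them with pointwise mutual information, one standard measure of association. The function name and the toy gloss sequence are hypothetical, not taken from the paper or from ELAN.

```python
import math
from collections import Counter

def bigram_pmi(glosses):
    """Count bigrams over a sequence of sign glosses and score each with
    pointwise mutual information (PMI), a common association measure."""
    unigrams = Counter(glosses)
    bigrams = Counter(zip(glosses, glosses[1:]))
    n_uni = sum(unigrams.values())
    n_bi = sum(bigrams.values())

    scores = {}
    for (w1, w2), count in bigrams.items():
        p_xy = count / n_bi
        p_x = unigrams[w1] / n_uni
        p_y = unigrams[w2] / n_uni
        scores[(w1, w2)] = math.log2(p_xy / (p_x * p_y))

    # Most frequent bigrams first; PMI breaks ties as a view of association strength.
    return sorted(scores.items(), key=lambda kv: (-bigrams[kv[0]], -kv[1]))

# Example with a hypothetical gloss sequence (not from a real corpus):
print(bigram_pmi(["IX-1", "LIKE", "COFFEE", "IX-1", "LIKE", "TEA"]))
```

In a corpus workflow along the lines the abstract describes, the gloss sequence would come from annotation tiers, and the most frequent bigrams would then be examined for coarticulatory timing effects.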