Silvia Hansen-Schirra

Also published as: Silvia Hansen


2021

Post-Editing Job Profiles for Subtitlers
Anke Tardel | Silvia Hansen-Schirra | Jean Nitzke
Proceedings of the 1st Workshop on Automatic Spoken Language Translation in Real-World Settings (ASLTRW)

Language technologies such as machine translation (MT), the application of artificial intelligence in general, and an abundance of CAT tools and platforms have an increasing influence on the translation market. Human interaction with these technologies becomes ever more important as they impact translators’ workflows, work environments, and job profiles, and this in turn has implications for translator training. One of the tasks that emerged with language technologies is post-editing (PE), in which a human translator corrects raw machine-translated output according to given guidelines and quality criteria (O’Brien, 2011: 197-198). Already widely used in several traditional translation settings, PE has also come into focus in more creative processes such as literary translation and audiovisual translation (AVT). With the integration of MT systems, the translation process is expected to become more efficient. Both economic and cognitive processes are affected, and with them the necessary competences of all stakeholders involved change. In this paper, we describe the different potential job profiles and the respective competences needed when post-editing subtitles.

2019

Proceedings of the Second MEMENTO workshop on Modelling Parameters of Cognitive Effort in Translation Production
Michael Carl | Silvia Hansen-Schirra
Proceedings of the Second MEMENTO workshop on Modelling Parameters of Cognitive Effort in Translation Production

Translation Quality and Effort Prediction in Professional Machine Translation Post-Editing
Jennifer Vardaro | Moritz Schaeffer | Silvia Hansen-Schirra
Proceedings of the Second MEMENTO workshop on Modelling Parameters of Cognitive Effort in Translation Production

Automatization of subprocesses in subtitling
Anke Tardel | Silvia Hansen-Schirra | Silke Gutermuth | Moritz Schaeffer
Proceedings of the Second MEMENTO workshop on Modelling Parameters of Cognitive Effort in Translation Production

2006

Multi-dimensional Annotation and Alignment in an English-German Translation Corpus
Silvia Hansen-Schirra | Stella Neumann | Mihaela Vela
Proceedings of the 5th Workshop on NLP and XML (NLPXML-2006): Multi-Dimensional Markup in Natural Language Processing

2004

The MULI Project: Annotation and Analysis of Information Structure in German and English
Stefan Baumann | Caren Brinckmann | Silvia Hansen-Schirra | Geert-Jan Kruijff | Ivana Kruijff-Korbayová | Stella Neumann | Erich Steiner | Elke Teich | Hans Uszkoreit
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Proceedings of the 5th International Workshop on Linguistically Interpreted Corpora
Silvia Hansen-Schirra | Stephan Oepen | Hans Uszkoreit
Proceedings of the 5th International Workshop on Linguistically Interpreted Corpora

Towards a Dependency-Based Gold Standard for German Parsers. The TIGER Dependency Bank
Martin Forst | Núria Bertomeu | Berthold Crysmann | Frederik Fouvry | Silvia Hansen-Schirra | Valia Kordoni
Proceedings of the 5th International Workshop on Linguistically Interpreted Corpora

Multi-dimensional annotation of linguistic corpora for investigating information structure
Stefan Baumann | Caren Brinckmann | Silvia Hansen-Schirra | Geert-Jan Kruijff | Ivana Kruijff-Korbayová | Stella Neumann | Elke Teich
Proceedings of the Workshop Frontiers in Corpus Annotation at HLT-NAACL 2004

2002

Developments in the TIGER Annotation Scheme and their Realization in the Corpus
Sabine Brants | Silvia Hansen
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

1999

Linking translation memories with example-based machine translation
Michael Carl | Silvia Hansen
Proceedings of Machine Translation Summit VII

The paper reports on experiments that compare the translation output of three corpus-based MT systems: a string-based translation memory (STM), a lexeme-based translation memory (LTM), and the example-based machine translation (EBMT) system EDGAR. We use a fully automatic evaluation method to compare the output of each MT system and discuss the results. We investigate the benefits of linking different MT strategies, such as TM systems and EBMT systems.