Jeevanthi Liyanapathirana


2021


Using speech technology in the translation process workflow in international organizations: A quantitative and qualitative study
Pierrette Bouillon | Jeevanthi Liyanapathirana
Proceedings of Machine Translation Summit XVIII: Users and Providers Track

In international organizations, the growing demand for translations has increased the need for post-editing. Several studies show that automatic speech recognition systems have the potential to increase both the productivity and the quality of the translation process. In this talk, we explore the possibilities of using speech in the translation process by conducting a post-editing experiment with three professional translators in an international organization. Our experiment compared three translation methods: speaking the translation with MT as an inspiration (RES), post-editing the MT suggestions by typing (PE), and editing the MT suggestion using speech (SPE). BLEU and HTER scores were used to compare the three methods. Our study shows that translators made more edits under the RES condition, whereas in SPE the resulting translations were closer to the reference according to the BLEU score and required fewer edits. Translation time was lowest with SPE, followed by PE and then RES, and the translators preferred using speech to typing. These results show the potential of speech when it is coupled with post-editing. To the best of our knowledge, this is the first quantitative study on using post-editing and speech together in large-scale international organizations.
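The BLEU and HTER comparison described in this abstract can be reproduced in spirit with off-the-shelf tooling. The following is a minimal sketch, not the authors' code, using the sacrebleu library with invented example sentences; HTER is simply TER computed for the raw MT suggestion with the human post-edited text as the reference.

import sacrebleu

# Invented example data; in the study these would be the MT suggestions,
# the translators' final outputs, and independent reference translations.
mt_outputs  = ["the growing demand for translation have increased"]
post_edited = ["the growing demand for translations has increased"]
references  = ["the growing demand for translations has increased"]

# BLEU: how close the final (post-edited or respoken) translation is to the reference.
bleu = sacrebleu.corpus_bleu(post_edited, [references])

# HTER: TER of the MT suggestion against its human post-edit,
# i.e. how much editing the translator had to apply.
hter = sacrebleu.corpus_ter(mt_outputs, [post_edited])

print(f"BLEU = {bleu.score:.2f}  HTER = {hter.score:.2f}")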

2019

Surveying the potential of using speech technologies for post-editing purposes in the context of international organizations: What do professional translators think?
Jeevanthi Liyanapathirana | Pierrette Bouillon | Bartolomé Mesa-Lao
Proceedings of Machine Translation Summit XVII: Translator, Project and User Tracks

2016

Using the TED Talks to Evaluate Spoken Post-editing of Machine Translation
Jeevanthi Liyanapathirana | Andrei Popescu-Belis
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

This paper presents a solution to evaluate spoken post-editing of imperfect machine translation output by a human translator. We compare two approaches to the combination of machine translation (MT) and automatic speech recognition (ASR): a heuristic algorithm and a machine learning method. To obtain a data set with spoken post-editing information, we use the French version of TED talks as the source texts submitted to MT, and the spoken English counterparts as their corrections, which are submitted to an ASR system. We experiment with various levels of artificial ASR noise and also with a state-of-the-art ASR system. The results show that the combination of MT with ASR improves over both individual outputs of MT and ASR in terms of BLEU scores, especially when ASR performance is low.
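The abstract mentions experimenting with various levels of artificial ASR noise. Below is a minimal, hypothetical sketch of one way such noise can be injected into a clean transcript (random deletions and substitutions at a chosen rate); it is illustrative only and not the procedure used in the paper.

import random

def add_asr_noise(tokens, noise_rate=0.1, vocab=None, seed=0):
    """Return a noisy copy of `tokens`, simulating ASR deletion/substitution errors."""
    rng = random.Random(seed)
    vocab = vocab or tokens
    noisy = []
    for tok in tokens:
        r = rng.random()
        if r < noise_rate / 2:
            continue                          # simulated deletion error
        elif r < noise_rate:
            noisy.append(rng.choice(vocab))   # simulated substitution error
        else:
            noisy.append(tok)                 # word recognized correctly
    return noisy

print(add_asr_noise("the spoken english counterparts serve as corrections".split(), noise_rate=0.3))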

2012

Discourse-level Annotation over Europarl for Machine Translation: Connectives and Pronouns
Andrei Popescu-Belis | Thomas Meyer | Jeevanthi Liyanapathirana | Bruno Cartoni | Sandrine Zufferey
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

This paper describes methods and results for the annotation of two discourse-level phenomena, connectives and pronouns, over a multilingual parallel corpus. Excerpts from Europarl in English and French have been annotated with disambiguation information for connectives and pronouns, for about 3600 tokens. These data are then used in several ways: for cross-linguistic studies, for training automatic disambiguation software, and ultimately for training and testing discourse-aware statistical machine translation systems. The paper presents the annotation procedures and their results in detail, and gives an overview of the first systems trained on the annotated resources and of their use for machine translation.
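As an illustration of the kind of automatic disambiguation software mentioned above, the sketch below trains a toy classifier that labels the discourse sense of the English connective "since" (temporal vs. causal) from surrounding words. The data, feature choices, and sense labels are invented for the example and do not come from the Europarl annotation described in the paper.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples: sentences containing the connective "since",
# labelled with its discourse sense (temporal vs. causal).
sentences = [
    "since the vote last year the situation has improved",
    "since 2004 ten new states have joined the union",
    "we must act since the crisis affects all member states",
    "since the proposal lacks funding it cannot be adopted",
]
senses = ["temporal", "temporal", "causal", "causal"]

# Bag-of-words features over the sentence plus a linear classifier.
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(sentences, senses)

print(clf.predict(["since the treaty entered into force trade has grown"]))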