2023
Turning Whisper into Real-Time Transcription System
Dominik Macháček, Raj Dabre, Ondřej Bojar
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: System Demonstrations
Robustness of Multi-Source MT to Transcription Errors
Dominik Macháček, Peter Polák, Ondřej Bojar, Raj Dabre
Findings of the Association for Computational Linguistics: ACL 2023
Automatic speech translation is sensitive to speech recognition errors, but in a multilingual scenario, the same content may be available in various languages via simultaneous interpreting, dubbing or subtitling. In this paper, we hypothesize that leveraging multiple sources will improve translation quality if the sources complement one another in terms of correct information they contain. To this end, we first show that on a 10-hour ESIC corpus, the ASR errors in the original English speech and its simultaneous interpreting into German and Czech are mutually independent. We then use two sources, English and German, in a multi-source setting for translation into Czech to establish its robustness to ASR errors. Furthermore, we observe this robustness when translating both noisy sources together in a simultaneous translation setting. Our results show that multi-source neural machine translation has the potential to be useful in a real-time simultaneous translation setting, thereby motivating further investigation in this area.
MT Metrics Correlate with Human Ratings of Simultaneous Speech Translation
Dominik Macháček, Ondřej Bojar, Raj Dabre
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
There have been several meta-evaluation studies on the correlation between human ratings and offline machine translation (MT) evaluation metrics such as BLEU, chrF2, BertScore and COMET. These metrics have been used to evaluate simultaneous speech translation (SST), but their correlations with human ratings of SST, which have recently been collected as Continuous Ratings (CR), are unclear. In this paper, we leverage the evaluations of candidate systems submitted to the English-German SST task at IWSLT 2022 and conduct an extensive correlation analysis of CR and the aforementioned metrics. Our study reveals that the offline metrics are well correlated with CR and can be reliably used for evaluating machine translation in simultaneous mode, with some limitations on the test set size. We conclude that, given the current quality levels of SST, these metrics can be used as proxies for CR, alleviating the need for large-scale human evaluation. Additionally, we observe that the correlations of the metrics with translation as a reference are significantly higher than with simultaneous interpreting, and thus we recommend the former for reliable evaluation.
2022
Continuous Rating as Reliable Human Evaluation of Simultaneous Speech Translation
Dávid Javorský, Dominik Macháček, Ondřej Bojar
Proceedings of the Seventh Conference on Machine Translation (WMT)
Simultaneous speech translation (SST) can be evaluated on simulated online events where human evaluators watch subtitled videos and continuously express their satisfaction by pressing buttons (so-called Continuous Rating). Continuous Rating is easy to collect, but little is known about its reliability, or its relation to SST users' comprehension of a foreign-language document. In this paper, we contrast Continuous Rating with factual questionnaires answered by judges with different levels of source language knowledge. Our results show that Continuous Rating is an easy and reliable SST quality assessment method if the judges have at least limited knowledge of the source language. Our study indicates users' preferences on subtitle layout and presentation style and, most importantly, provides significant evidence that users with advanced source language knowledge prefer low latency over fewer re-translations.
2021
ELITR Multilingual Live Subtitling: Demo and Strategy
Ondřej Bojar, Dominik Macháček, Sangeet Sagar, Otakar Smrž, Jonáš Kratochvíl, Peter Polák, Ebrahim Ansari, Mohammad Mahmoudi, Rishu Kumar, Dario Franceschini, Chiara Canton, Ivan Simonini, Thai-Son Nguyen, Felix Schneider, Sebastian Stüker, Alex Waibel, Barry Haddow, Rico Sennrich, Philip Williams
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations
This paper presents an automatic speech translation system aimed at live subtitling of conference presentations. We describe the overall architecture and key processing components. More importantly, we explain our strategy for building a complex system for end-users from numerous individual components, each of which has been tested only in laboratory conditions. The system is a working prototype that is routinely tested in recognizing English, Czech, and German speech and presenting it translated simultaneously into 42 target languages.
2020
CUNI Neural ASR with Phoneme-Level Intermediate Step for Non-Native SLT at IWSLT 2020
Peter Polák, Sangeet Sagar, Dominik Macháček, Ondřej Bojar
Proceedings of the 17th International Conference on Spoken Language Translation
In this paper, we present our submission to the Non-Native Speech Translation Task at IWSLT 2020. Our main contribution is a proposed speech recognition pipeline that consists of an acoustic model and a phoneme-to-grapheme model, using phonemes as an intermediate representation. We demonstrate that the proposed pipeline surpasses commercially available automatic speech recognition (ASR) systems and submit it to the ASR track. We complement this ASR with off-the-shelf MT systems to also take part in the speech translation track.
ELITR Non-Native Speech Translation at IWSLT 2020
Dominik Macháček, Jonáš Kratochvíl, Sangeet Sagar, Matúš Žilinec, Ondřej Bojar, Thai-Son Nguyen, Felix Schneider, Philip Williams, Yuekun Yao
Proceedings of the 17th International Conference on Spoken Language Translation
This paper is an ELITR system submission for the non-native speech translation task at IWSLT 2020. We describe systems for offline ASR, real-time ASR, and our cascaded approach to offline SLT and real-time SLT. We select our primary candidates from a pool of pre-existing systems, develop a new end-to-end general ASR system, and train a hybrid ASR on non-native speech. The small size of the provided validation set prevents us from carrying out a thorough validation, but we submit all the unselected candidates for contrastive evaluation on the test set.
Removing European Language Barriers with Innovative Machine Translation Technology
Dario Franceschini, Chiara Canton, Ivan Simonini, Armin Schweinfurth, Adelheid Glott, Sebastian Stüker, Thai-Son Nguyen, Felix Schneider, Thanh-Le Ha, Alex Waibel, Barry Haddow, Philip Williams, Rico Sennrich, Ondřej Bojar, Sangeet Sagar, Dominik Macháček, Otakar Smrž
Proceedings of the 1st International Workshop on Language Technology Platforms
This paper presents our progress towards deploying a versatile communication platform for the task of highly multilingual live speech translation and subtitling of conferences and remote meetings. The platform has been designed with a focus on very low latency and high flexibility, allowing research prototypes of speech and text processing tools to be easily connected, regardless of where they physically run. We outline our architecture solution and briefly compare it with the ELG platform. We provide technical details on the most important components and summarize the test deployment events we have run so far.
ELITR: European Live Translator
Ondřej Bojar, Dominik Macháček, Sangeet Sagar, Otakar Smrž, Jonáš Kratochvíl, Ebrahim Ansari, Dario Franceschini, Chiara Canton, Ivan Simonini, Thai-Son Nguyen, Felix Schneider, Sebastian Stüker, Alex Waibel, Barry Haddow, Rico Sennrich, Philip Williams
Proceedings of the 22nd Annual Conference of the European Association for Machine Translation
The ELITR (European Live Translator) project aims to create a speech translation system for simultaneous subtitling of conferences and online meetings, targeting up to 43 languages. The technology is tested by the Supreme Audit Office of the Czech Republic and by alfaview®, a German online conferencing system. Other project goals are to advance document-level and multilingual machine translation, automatic speech recognition, and automatic minuting.
2019
CUNI Systems for the Unsupervised News Translation Task in WMT 2019
Ivana Kvapilíková, Dominik Macháček, Ondřej Bojar
Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
In this paper, we describe the CUNI translation system used for the unsupervised news shared task of the ACL 2019 Fourth Conference on Machine Translation (WMT19). We follow the strategy of Artetxe et al. (2018b), creating a seed phrase-based system whose phrase table is initialized from cross-lingual embedding mappings trained on monolingual data, followed by a neural machine translation system trained on synthetic parallel data. The synthetic corpus was produced from a monolingual corpus by a tuned PBMT model refined through iterative back-translation. We further focus on the handling of named entities, i.e. the part of the vocabulary where the cross-lingual embedding mapping suffers most. Our system reaches a BLEU score of 15.3 on the German-Czech WMT19 shared task.
English-Czech Systems in WMT19: Document-Level Transformer
Martin Popel, Dominik Macháček, Michal Auersperger, Ondřej Bojar, Pavel Pecina
Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
We describe our NMT systems submitted to the WMT19 shared task in English→Czech news translation. Our systems are based on the Transformer model implemented in either the Tensor2Tensor (T2T) or the Marian framework. We aimed at improving the adequacy and coherence of translated documents by enlarging the context of the source and target. Instead of translating each sentence independently, we split the document into possibly overlapping multi-sentence segments. In the case of the T2T implementation, this "document-level"-trained system achieves a +0.6 BLEU improvement (p < 0.05) relative to the same system applied to isolated sentences. To assess the potential effect document-level models might have on lexical coherence, we performed a semi-automatic analysis, which revealed that only a few sentences improved in this aspect. Thus, we cannot draw any conclusions from this weak evidence.
2017
Multilingual Ontologies for the Representation and Processing of Folktales
Thierry Declerck, Anastasija Aman, Martin Banzer, Dominik Macháček, Lisa Schäfer, Natalia Skachkova
Proceedings of the First Workshop on Language Technology for Digital Humanities in Central and (South-)Eastern Europe
We describe work done in the field of folkloristics, consisting of creating ontologies based on well-established studies proposed by "classical" folklorists. This work makes a large amount of digital, structured knowledge on folktales available to digital humanists. The ontological encoding of past and current motif-indexation and classification systems for folktales was, in a first step, limited to English-language data. This led us to focus on making these newly generated formal knowledge sources available in a few more languages, such as German, Russian and Bulgarian. We stress the importance of achieving this multilingual extension of our ontologies at a larger scale, for example in order to support the automated analysis and classification of such narratives in a large variety of languages, as these become increasingly accessible on the Web.