Malgorzata Anna Ulasik


2022

SDS-200: A Swiss German Speech to Standard German Text Corpus
Michel Plüss | Manuela Hürlimann | Marc Cuny | Alla Stöckli | Nikolaos Kapotis | Julia Hartmann | Malgorzata Anna Ulasik | Christian Scheller | Yanick Schraner | Amit Jain | Jan Deriu | Mark Cieliebak | Manfred Vogel
Proceedings of the Thirteenth Language Resources and Evaluation Conference

We present SDS-200, a corpus of Swiss German dialectal speech with Standard German text translations, annotated with dialect, age, and gender information of the speakers. The dataset allows for training speech translation, dialect recognition, and speech synthesis systems, among other applications. The data was collected using a web recording tool that is open to the public. Each participant was given a text in Standard German and asked to translate it into their Swiss German dialect before recording it. To increase the corpus quality, recordings were validated by other participants. The data consists of 200 hours of speech by around 4000 different speakers and covers a large part of the Swiss German dialect landscape. We release SDS-200 alongside a baseline speech translation model, which achieves a word error rate (WER) of 30.3 and a BLEU score of 53.1 on the SDS-200 test set. Furthermore, we use SDS-200 to fine-tune a pre-trained XLS-R model, achieving a WER of 21.6 and a BLEU score of 64.0.

2020

CEASR: A Corpus for Evaluating Automatic Speech Recognition
Malgorzata Anna Ulasik | Manuela Hürlimann | Fabian Germann | Esin Gedik | Fernando Benites | Mark Cieliebak
Proceedings of the Twelfth Language Resources and Evaluation Conference

In this paper, we present CEASR, a Corpus for Evaluating the quality of Automatic Speech Recognition (ASR). It is a data set based on public speech corpora, containing metadata along with transcripts generated by several modern state-of-the-art ASR systems. CEASR provides this data in a unified structure, consistent across all corpora and systems, with normalised transcript texts and metadata. We use CEASR to evaluate the quality of ASR systems by calculating an average Word Error Rate (WER) per corpus, per system, and per corpus-system pair. Our experiments show a substantial difference in accuracy between commercial and open-source ASR tools, as well as differences of up to a factor of ten for single systems on different corpora. CEASR allowed us to obtain these results easily and efficiently. Our corpus enables researchers to perform ASR-related evaluations and various in-depth analyses with noticeably reduced effort, i.e. without the need to collect, process and transcribe the speech data themselves.
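Both papers report Word Error Rate as their central quality metric. A minimal sketch of how WER is conventionally computed, via word-level Levenshtein distance (the function name is illustrative and not taken from the CEASR tooling):

```python
def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed by dynamic programming over word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# Two deleted words out of six reference words -> WER of 2/6
print(round(wer("the cat sat on the mat", "the cat sat mat"), 2))  # 0.33
```

A corpus-level WER, as reported in both abstracts, is usually obtained by summing edit operations and reference lengths over all utterances rather than averaging per-utterance scores, so long utterances weigh proportionally more.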