Malgorzata Anna Ulasik
We present methods for processing dynamic data that make it possible to trace the process of sentence production. As an incremental and non-linear activity, writing produces incomplete or ill-formed intermediate versions that evolve over the course of frequent revisions. Using keystroke logging tools and natural language processing (NLP), we propose a framework for automatically reconstructing sentence histories. In addition, we implement in THEtool a model that synchronizes sentence histories with revision events and pause patterns. This multi-layered representation facilitates a detailed understanding of the cognitive and linguistic aspects of sentence construction.
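As a rough illustration of the kind of processing this abstract describes, the sketch below replays a simplified keystroke log to recover intermediate versions of a sentence. The event format, the pause threshold, and the replay logic are assumptions made for the example; they are not the representation or algorithm used by THEtool.

    # Minimal sketch: replaying a simplified keystroke log to recover
    # intermediate text versions. The event schema and the pause-based
    # snapshot heuristic are hypothetical simplifications.
    from dataclasses import dataclass

    @dataclass
    class KeyEvent:
        pos: int        # caret position in the current text
        insert: str     # characters typed ("" for a pure deletion)
        delete: int     # number of characters removed at pos
        pause_ms: int   # pause preceding this event

    def replay(events, pause_threshold_ms=2000):
        """Apply events in order; snapshot the text whenever a long pause
        precedes the next event, since long pauses often coincide with
        revision boundaries in the writing process."""
        text, versions = "", []
        for ev in events:
            if ev.pause_ms >= pause_threshold_ms and text:
                versions.append(text)
            text = text[:ev.pos] + ev.insert + text[ev.pos + ev.delete:]
        versions.append(text)  # final state of the sentence
        return versions

    log = [KeyEvent(0, "The cat sat", 0, 0),
           KeyEvent(4, "", 3, 2500),      # delete "cat" after a long pause
           KeyEvent(4, "dog", 0, 300)]    # type the replacement
    print(replay(log))  # ['The cat sat', 'The dog sat']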
We present SDS-200, a corpus of Swiss German dialectal speech with Standard German text translations, annotated with dialect, age, and gender information of the speakers. The dataset allows for training speech translation, dialect recognition, and speech synthesis systems, among others. The data was collected using a web recording tool that is open to the public. Each participant was given a text in Standard German and asked to translate it to their Swiss German dialect before recording it. To increase the corpus quality, recordings were validated by other participants. The data consists of 200 hours of speech by around 4000 different speakers and covers a large part of the Swiss German dialect landscape. We release SDS-200 alongside a baseline speech translation model, which achieves a word error rate (WER) of 30.3 and a BLEU score of 53.1 on the SDS-200 test set. Furthermore, we use SDS-200 to fine-tune a pre-trained XLS-R model, achieving 21.6 WER and 64.0 BLEU.
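The abstract reports WER and BLEU for the released baseline. As a minimal sketch of how such scores can be computed from system output and reference translations, assuming the jiwer and sacrebleu libraries and hypothetical file names (this is not the authors' evaluation code):

    # Score a speech translation system's output against Standard German
    # references. File names are hypothetical placeholders.
    import jiwer
    import sacrebleu

    references = [line.strip() for line in open("test.ref.de")]   # hypothetical file
    hypotheses = [line.strip() for line in open("test.hyp.de")]   # hypothetical file

    wer = jiwer.wer(references, hypotheses) * 100                 # word error rate, in percent
    bleu = sacrebleu.corpus_bleu(hypotheses, [references]).score  # corpus-level BLEU

    print(f"WER:  {wer:.1f}")
    print(f"BLEU: {bleu:.1f}")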
In this paper, we present CEASR, a Corpus for Evaluating the quality of Automatic Speech Recognition (ASR). It is a data set based on public speech corpora, containing metadata along with transcripts generated by several modern state-of-the-art ASR systems. CEASR provides this data in a unified structure, consistent across all corpora and systems, with normalised transcript texts and metadata. We use CEASR to evaluate the quality of ASR systems by calculating an average Word Error Rate (WER) per corpus, per system, and per corpus-system pair. Our experiments show a substantial difference in accuracy between commercial and open-source ASR tools, as well as differences of up to a factor of ten for individual systems on different corpora. CEASR allowed us to obtain these results quickly and with little effort. Our corpus enables researchers to perform ASR-related evaluations and various in-depth analyses with noticeably reduced effort, i.e. without the need to collect, process and transcribe the speech data themselves.
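As a minimal sketch of the aggregation described above (average WER per corpus, per system, and per corpus-system pair), assuming per-utterance WER values are already available in a table; the column names and values are hypothetical and do not reflect CEASR's actual schema:

    # One row per utterance: source corpus, ASR system, per-utterance WER.
    # Values are made up for illustration only.
    import pandas as pd

    df = pd.DataFrame({
        "corpus": ["A", "A", "B", "B"],
        "system": ["sys1", "sys2", "sys1", "sys2"],
        "wer":    [0.12, 0.25, 0.31, 0.48],
    })

    print(df.groupby("corpus")["wer"].mean())              # average WER per corpus
    print(df.groupby("system")["wer"].mean())              # average WER per system
    print(df.groupby(["corpus", "system"])["wer"].mean())  # per corpus-system pair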