Katsiaryna Mlynchyk
2024
Error-preserving Automatic Speech Recognition of Young English Learners’ Language
Janick Michot | Manuela Hürlimann | Jan Deriu | Luzia Sauer | Katsiaryna Mlynchyk | Mark Cieliebak
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
One of the central skills that language learners need to practice is speaking the language. Currently, students in school do not get enough speaking opportunities and lack conversational practice. Recent advances in speech technology and natural language processing allow the creation of novel tools for students to practice their speaking skills. In this work, we tackle the first component of such a pipeline, namely, the automatic speech recognition (ASR) module. First, state-of-the-art models are often trained on read-aloud data from adult native speakers and do not transfer well to young language learners’ speech. Second, most ASR systems contain a powerful language model, which smooths out mistakes made by the speakers. To give corrective feedback, which is a crucial part of language learning, the ASR systems in our setting need to preserve the mistakes made by the language learners. In this work, we build an ASR system that satisfies these requirements: it works on spontaneous speech by young language learners and preserves their mistakes. For this, we collected a corpus containing around 85 hours of English audio spoken by Swiss learners from grades 4 to 6 on different language learning tasks, which we used to train an ASR model. Our experiments show that our model benefits from direct fine-tuning on children’s voices and has a much higher error preservation rate.
2020
A Methodology for Creating Question Answering Corpora Using Inverse Data Annotation
Jan Deriu | Katsiaryna Mlynchyk | Philippe Schläpfer | Alvaro Rodrigo | Dirk von Grünigen | Nicolas Kaiser | Kurt Stockinger | Eneko Agirre | Mark Cieliebak
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
In this paper, we introduce a novel methodology to efficiently construct a corpus for question answering over structured data. For this, we introduce an intermediate representation based on the logical query plan in a database, called Operation Trees (OT). This representation allows us to invert the annotation process without losing flexibility in the types of queries that we generate. Furthermore, it allows for fine-grained alignment of the tokens to the operations. Thus, we randomly generate OTs from a context-free grammar, and annotators just have to write the appropriate question and assign the tokens. We compare our corpus OTTA (Operation Trees and Token Assignment), a large semantic parsing corpus for evaluating natural language interfaces to databases, to Spider and LC-QuaD 2.0 and show that our methodology more than triples the annotation speed while maintaining the complexity of the queries. Finally, we train a state-of-the-art semantic parsing model on our data and show that our dataset is challenging and that the token alignment can be leveraged to significantly increase performance.