Laura Docio-Fernandez

Also published as: Laura Docío-Fernández


2024

GiDi: A Virtual Assistant for Screening Protocols at Home
Andrés Piñeiro-Martín | Carmen García-Mateo | Laura Docío-Fernández | María del Carmen López-Pérez | Ignacio Novo-Veleiro
Proceedings of the 16th International Conference on Computational Processing of Portuguese - Vol. 2

SignaMed: a Cooperative Bilingual LSE-Spanish Dictionary in the Healthcare Domain
Manuel Vázquez-Enríquez | José Luis Alba-Castro | Ania Pérez-Pérez | Carmen Cabeza-Pereiro | Laura Docío-Fernández
Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources

FalAI: A Dataset for End-to-end Spoken Language Understanding in a Low-Resource Scenario
Andres Pineiro-Martin | Carmen Garcia-Mateo | Laura Docio-Fernandez | Maria del Carmen Lopez-Perez | Jose Gandarela-Rodriguez
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

End-to-end (E2E) Spoken Language Understanding (SLU) systems infer structured information directly from the speech signal using a single model. Owing to the success of virtual assistants and the increasing demand for speech interfaces, these architectures are being actively researched for their potential to improve performance by exploiting acoustic information and avoiding the cascading errors of traditional pipelines. However, such systems require large amounts of specific, well-labelled speech data for training, which is expensive to obtain even in English, where the number of public audio datasets for SLU is limited. In this paper, we release the FalAI dataset, the largest public SLU dataset in terms of hours (250), recordings (260,000) and participants (over 10,000); it is also the first SLU dataset in Galician and the first to be obtained in a low-resource scenario. Furthermore, we present new complexity measures for the text corpora, describe the strategies followed for the design, collection and validation of the dataset, and define splits for noisy audio, hesitant audio, and audio where the sentence has changed but the structured information is preserved. These novel splits provide a unique resource for testing SLU systems in challenging, real-world scenarios.
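As an illustration of the E2E SLU idea described in the abstract, the sketch below maps acoustic features directly to an utterance-level intent label and frame-level slot tags with a single PyTorch model; the module names, feature type and label sizes are assumptions made for the example, not the FalAI baseline.

```python
# Minimal illustrative sketch of an end-to-end SLU model: one network maps
# acoustic features straight to structured output (intent + slot tags),
# with no intermediate ASR transcript. Names and sizes are hypothetical.
import torch
import torch.nn as nn


class ToyE2ESLU(nn.Module):
    def __init__(self, n_mels=80, hidden=256, n_intents=12, n_slot_tags=30):
        super().__init__()
        # Acoustic encoder: log-mel frames -> contextual frame embeddings.
        self.encoder = nn.GRU(n_mels, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
        # Utterance-level head for the intent label.
        self.intent_head = nn.Linear(2 * hidden, n_intents)
        # Frame-level head for slot tags (aligned to encoder frames).
        self.slot_head = nn.Linear(2 * hidden, n_slot_tags)

    def forward(self, mel_frames):             # (batch, time, n_mels)
        states, _ = self.encoder(mel_frames)   # (batch, time, 2*hidden)
        utterance = states.mean(dim=1)         # simple mean pooling
        return self.intent_head(utterance), self.slot_head(states)


if __name__ == "__main__":
    model = ToyE2ESLU()
    dummy_batch = torch.randn(4, 300, 80)      # 4 utterances, 300 frames each
    intent_logits, slot_logits = model(dummy_batch)
    print(intent_logits.shape, slot_logits.shape)  # (4, 12) (4, 300, 30)
```

Mean pooling over encoder states is only the simplest utterance summary; attention pooling or sequence-to-sequence decoding of the structured output would be natural alternatives in a real system.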

2020

LSE_UVIGO: A Multi-source Database for Spanish Sign Language Recognition
Laura Docío-Fernández | José Luis Alba-Castro | Soledad Torres-Guijarro | Eduardo Rodríguez-Banga | Manuel Rey-Area | Ania Pérez-Pérez | Sonia Rico-Alonso | Carmen García-Mateo
Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives

This paper presents LSE_UVIGO, a multi-source database designed to foster research on Sign Language Recognition. It is being recorded and compiled for Spanish Sign Language (LSE, its Spanish acronym) and also contains spoken Galician, so it is well suited to research on these languages and also useful for fundamental research on any other sign language. LSE_UVIGO is composed of two datasets. The first, LSE_Lex40_UVIGO, is a multi-sensor, multi-signer dataset acquired from scratch and designed to grow incrementally, both in the complexity of the visual content and in the variety of signers. It contains static and co-articulated sign recordings, fingerspelled and gloss-based isolated words, and sentences. It is acquired in a controlled lab environment in order to obtain good-quality videos with sharp frames and RGB and depth information, making them suitable for trying different approaches to automatic recognition. The second, LSE_TVGWeather_UVIGO, is being populated from regional television weather forecasts interpreted into LSE, as a faster way to acquire high-quality, continuous LSE recordings with a domain-restricted vocabulary and a correspondence to spoken sentences.
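To make the two-subset organisation above concrete, a toy data model is sketched below; the class and field names (e.g. SignRecording, lse_lex40) are hypothetical and do not correspond to any official LSE_UVIGO loader or schema.

```python
# Illustrative data model for the two LSE_UVIGO subsets described above.
# Field names are assumptions made for the sketch, not the official schema.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SignRecording:
    video_rgb: str                      # path to the RGB clip
    video_depth: Optional[str] = None   # depth stream (lab recordings only)
    gloss_sequence: List[str] = field(default_factory=list)
    spoken_translation: str = ""        # spoken-language sentence, when available


@dataclass
class LSEUvigoDatabase:
    # LSE_Lex40_UVIGO: lab-recorded, multi-sensor, multi-signer; isolated
    # fingerspelled and gloss-based words, plus sentences.
    lse_lex40: List[SignRecording] = field(default_factory=list)
    # LSE_TVGWeather_UVIGO: continuous signing from interpreted TV weather
    # forecasts, with corresponding spoken sentences.
    lse_tvg_weather: List[SignRecording] = field(default_factory=list)
```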

2014

Introducing a Framework for the Evaluation of Music Detection Tools
Paula Lopez-Otero | Laura Docio-Fernandez | Carmen Garcia-Mateo
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

The huge amount of multimedia information available nowadays makes manual processing prohibitive, requiring tools for automatic labelling of this content. This paper describes a framework for assessing a music detection tool; the framework consists of a database, composed of several hours of radio recordings covering different types of radio programmes, and a set of measures for evaluating the performance of a music detection tool in detail. A tool for automatically detecting music in audio streams, with application to music information retrieval tasks, is presented as well. The aim of this tool is to discard audio excerpts that do not contain music in order to avoid unnecessary processing. The tool applies fingerprinting to different acoustic features extracted from the audio signal in order to remove perceptual irrelevancies, and a support vector machine is trained to classify these fingerprints into the classes music and no-music. The validity of the tool is assessed within the proposed evaluation framework.
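As a rough sketch of the classification stage described above, each audio excerpt can be summarised as a compact feature vector and fed to a support vector machine that separates music from no-music; the MFCC-statistics features used here (via librosa) are an assumption standing in for the paper's fingerprints.

```python
# Toy sketch of the music/no-music classification stage: summarise each audio
# excerpt as a fixed-size feature vector and train an SVM on the two classes.
# The MFCC-statistics features are a stand-in for the paper's fingerprints.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def excerpt_features(path, sr=16000):
    """Mean and std of MFCCs over the excerpt: a crude fixed-size summary."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def train_music_detector(paths, labels):
    """labels: 1 for music, 0 for no-music."""
    X = np.stack([excerpt_features(p) for p in paths])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, np.asarray(labels))
    return clf


# Example usage with hypothetical file lists:
# detector = train_music_detector(train_paths, train_labels)
# is_music = detector.predict([excerpt_features("excerpt.wav")])
```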

2004

Transcrigal: A Bilingual System for Automatic Indexing of Broadcast News
Carmen Garcia-Mateo | Javier Dieguez-Tirado | Laura Docio-Fernandez | Antonio Cardenal-Lopez
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

2002

Acoustic Modeling and Training of a Bilingual ASR System when a Minority Language is Involved
Laura Docío-Fernández | Carmen García-Mateo
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)