2024
Methods of Automatic Matrix Language Determination for Code-Switched Speech
Olga Iakovenko | Thomas Hain
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Code-switching (CS) is the process by which speakers alternate between two or more languages, and it is becoming increasingly common in the modern world. To better describe CS speech, the Matrix Language Frame (MLF) theory introduces the concept of a Matrix Language (ML), the language that provides the grammatical structure of a CS utterance. In this work the MLF theory was used to develop systems for Matrix Language Identity (MLID) determination. The MLID of English/Mandarin and English/Spanish CS text and speech was compared to acoustic language identity (LID), which is a typical way to identify a language in monolingual utterances. MLID predictors from audio show higher correlation with the textual principles than LID in all cases, while also outperforming LID in an MLID recognition task based on F1 macro (60%) and correlation score (0.38). This novel approach identified that the non-English languages (Mandarin and Spanish) are preferred as the ML over English, contrary to the monolingual choice made by LID.
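As a minimal illustration of how an ML decision differs from a plain utterance-level LID decision, the sketch below assumes token-level language tags are already available and uses a majority-token heuristic as a crude stand-in for the MLF-based criteria; the function name, tags, and heuristic are illustrative and are not the method described in the paper.

```python
# Toy contrast between utterance-level LID and a matrix-language decision,
# assuming token-level language tags are available. The majority-token rule
# is only a rough proxy for MLF criteria such as word order and system
# morphemes; it is not the paper's approach.
from collections import Counter

def matrix_language(token_langs):
    """Pick the language supplying the most tokens as a stand-in ML label."""
    return Counter(token_langs).most_common(1)[0][0]

# "I want to buy 一个 新的 手机" -> English frame with Mandarin insertions
tags = ["en", "en", "en", "en", "zh", "zh", "zh"]
print(matrix_language(tags))   # 'en'
```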
Improving Acoustic Word Embeddings through Correspondence Training of Self-supervised Speech Representations
Amit Meghanani | Thomas Hain
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Acoustic word embeddings (AWEs) are vector representations of spoken words. An effective method for obtaining AWEs is the Correspondence Auto-Encoder (CAE). In the past, the CAE method has been used with traditional MFCC features. Representations obtained from self-supervised learning (SSL)-based speech models such as HuBERT and Wav2vec2 outperform MFCCs in many downstream tasks, but they have not been well studied in the context of learning AWEs. This work explores the effectiveness of the CAE with SSL-based speech representations for obtaining improved AWEs. Additionally, the capabilities of SSL-based speech models are explored in cross-lingual scenarios for obtaining AWEs. Experiments are conducted on five languages: Polish, Portuguese, Spanish, French, and English. The HuBERT-based CAE model achieves the best word-discrimination results in all languages, despite HuBERT being pre-trained on English only. The HuBERT-based CAE model also works well in cross-lingual settings: when trained on one source language and tested on target languages, it outperforms MFCC-based CAE models trained on the target languages themselves.
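A minimal sketch of the CAE idea follows, assuming frame-level features (e.g. from HuBERT or MFCCs) have already been extracted for pairs of spoken instances of the same word; the class name, model sizes, and training loop are illustrative and not the configuration used in the paper.

```python
# Minimal Correspondence Auto-Encoder (CAE) sketch for acoustic word
# embeddings over pre-extracted frame features. Hyperparameters are dummies.
import torch
import torch.nn as nn

class CorrespondenceAE(nn.Module):
    def __init__(self, feat_dim=768, embed_dim=128):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, embed_dim, batch_first=True)
        self.decoder = nn.GRU(embed_dim, feat_dim, batch_first=True)

    def embed(self, frames):
        # frames: (batch, time, feat_dim); AWE = final encoder hidden state
        _, h = self.encoder(frames)
        return h[-1]                               # (batch, embed_dim)

    def forward(self, frames_a, target_len):
        awe = self.embed(frames_a)
        # Feed the embedding to the decoder at every output step
        dec_in = awe.unsqueeze(1).repeat(1, target_len, 1)
        recon, _ = self.decoder(dec_in)
        return recon, awe

# Training step: encode instance A of a word and reconstruct a *different*
# instance B of the same word, so the embedding keeps word identity and
# discards instance-specific detail (speaker, duration).
model = CorrespondenceAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames_a = torch.randn(8, 50, 768)   # instance A (dummy features)
frames_b = torch.randn(8, 60, 768)   # instance B of the same words
recon, _ = model(frames_a, target_len=frames_b.size(1))
loss = nn.functional.mse_loss(recon, frames_b)
opt.zero_grad()
loss.backward()
opt.step()
```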
Automatic Speech Recognition System-Independent Word Error Rate Estimation
Chanho Park | Mingjie Chen | Thomas Hain
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Word error rate (WER) is a metric used to evaluate the quality of transcriptions produced by Automatic Speech Recognition (ASR) systems. In many applications it is of interest to estimate the WER of a given pair of a speech utterance and a transcript. Previous work on WER estimation focused on building models trained with a specific ASR system in mind (referred to as ASR system-dependent); such estimators are also domain-dependent and inflexible in real-world applications. In this paper, a hypothesis generation method for ASR System-Independent WER Estimation (SIWE) is proposed. In contrast to prior work, the WER estimators are trained on data that simulates ASR system output: hypotheses are generated using phonetically similar or linguistically more likely alternative words. In WER estimation experiments, the proposed method reaches performance similar to ASR system-dependent WER estimators on in-domain data and achieves state-of-the-art performance on out-of-domain data. On the out-of-domain Switchboard and CALLHOME data, the SIWE model outperforms the baseline estimators in root mean square error and Pearson correlation coefficient by a relative 17.58% and 18.21%, respectively. Performance improves further when the WER of the training set is close to the WER of the evaluation set.
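The following is a simplified sketch of the hypothesis-generation idea: corrupt reference transcripts with substitutions, deletions, and insertions, then label each hypothesis with its true WER as a training target for the estimator. The character-overlap similarity, toy vocabulary, and corruption rates are placeholders, not the phonetic and linguistic scoring used in the paper.

```python
# Simplified system-independent hypothesis generation for WER-estimator
# training. Similarity is a crude character-overlap proxy; vocabulary and
# corruption rates are illustrative only.
import random

VOCAB = ["seven", "eleven", "heaven", "ate", "eight", "the", "a", "cat", "sat"]

def similar_word(word):
    # Pick the vocabulary word sharing the most characters (excluding itself).
    scored = [(len(set(word) & set(v)), v) for v in VOCAB if v != word]
    return max(scored)[1] if scored else word

def corrupt(reference, sub_p=0.15, del_p=0.05, ins_p=0.05):
    hyp = []
    for w in reference.split():
        r = random.random()
        if r < del_p:
            continue                                   # deletion
        hyp.append(similar_word(w) if r < del_p + sub_p else w)  # substitution
        if random.random() < ins_p:
            hyp.append(random.choice(VOCAB))           # insertion
    return " ".join(hyp)

def wer(ref, hyp):
    # Word-level Levenshtein distance normalised by reference length.
    r, h = ref.split(), hyp.split()
    d = [[i + j if i * j == 0 else 0 for j in range(len(h) + 1)]
         for i in range(len(r) + 1)]
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (r[i - 1] != h[j - 1]))
    return d[-1][-1] / max(len(r), 1)

ref = "the cat sat"
hyp = corrupt(ref)
print(hyp, wer(ref, hyp))   # (hypothesis, true WER) training pair
```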
2021
Uncertainty Aware Review Hallucination for Science Article Classification
Korbinian Friedl | Georgios Rizos | Lukas Stappen | Madina Hasan | Lucia Specia | Thomas Hain | Björn Schuller
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2016
Using phone features to improve dialogue state tracking generalisation to unseen states
Iñigo Casanueva | Thomas Hain | Mauro Nicolao | Phil Green
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue
The OpenCourseWare Metadiscourse (OCWMD) Corpus
Ghada Alharbi | Thomas Hain
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
This study describes a new corpus of over 60,000 hand-annotated metadiscourse acts from 106 OpenCourseWare lectures in two different disciplines: Physics and Economics. Metadiscourse is a set of linguistic expressions that signal different functions in the discourse; this type of language is hypothesised to be helpful in finding structure in unstructured text, such as lecture discourse. A brief summary is provided of the annotation scheme and labelling procedures, inter-annotator reliability statistics, overall distributional statistics, a description of auxiliary data that will be distributed with the corpus, and information on how to obtain the data. The results provide a deeper understanding of lecture structure and confirm that metadiscursive acts can be coded reliably in academic lectures across different disciplines. The next stage of our research will be to build a classification model to automate the tagging process, since manual annotation takes time and effort, and to use these tags as indicators of the higher-level structure of lecture discourse.
A Framework for Collecting Realistic Recordings of Dysarthric Speech - the homeService Corpus
Mauro Nicolao | Heidi Christensen | Stuart Cunningham | Phil Green | Thomas Hain
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
This paper introduces a new British English speech database, named the homeService corpus, which has been gathered as part of the homeService project. This project aims to help users with speech and motor disabilities to operate their home appliances using voice commands. The audio recorded during such interactions consists of realistic data from speakers with severe dysarthria. The majority of the homeService corpus is recorded in real home environments, where voice control is often the normal means by which users interact with their devices. The collection of the corpus is motivated by the shortage of realistic dysarthric speech corpora available to the scientific community. Along with details of how the data is organised and how it can be accessed, a brief description of the framework used to make the recordings is provided. Finally, the performance of the homeService automatic recogniser for dysarthric speech, trained with single-speaker data from the corpus, is provided as an initial baseline. Access to the homeService corpus is provided through the dedicated web page at http://mini.dcs.shef.ac.uk/resources/homeservice-corpus/, which will also carry the most up-to-date description of the data. At the time of writing, the collection process is still ongoing.
2015
Knowledge transfer between speakers for personalised dialogue management
Iñigo Casanueva | Thomas Hain | Heidi Christensen | Ricard Marxer | Phil Green
Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue
2014
The USFD SLT system for IWSLT 2014
Raymond W. M. Ng | Mortaza Doulaty | Rama Doddipatla | Wilker Aziz | Kashif Shah | Oscar Saz | Madina Hasan | Ghada AlHaribi | Lucia Specia | Thomas Hain
Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign
The University of Sheffield (USFD) participated in the International Workshop on Spoken Language Translation (IWSLT) in 2014. In this paper, we introduce the USFD SLT system for IWSLT. Automatic speech recognition (ASR) is performed by two multi-pass deep neural network systems with adaptation and rescoring techniques, and machine translation (MT) by a phrase-based system. The USFD primary system incorporates state-of-the-art ASR and MT techniques and gives BLEU scores of 23.45 and 14.75 on the English-to-French and English-to-German speech-to-text translation tasks with the IWSLT 2014 data. The USFD contrastive systems explore the integration of ASR and MT by using a quality estimation system to rescore the ASR outputs, optimising towards better translation. This gives a further 0.54 and 0.26 BLEU improvement on the IWSLT 2012 and 2014 evaluation data, respectively.
2013
homeService: Voice-enabled assistive technology in the home using cloud-based automatic speech recognition
Heidi Christensen | Iñigo Casanueva | Stuart Cunningham | Phil Green | Thomas Hain
Proceedings of the Fourth Workshop on Speech and Language Processing for Assistive Technologies
2012
Impact du degré de supervision sur l’adaptation à un domaine d’un modèle de langage à partir du Web (Impact of the level of supervision on Web-based language model domain adaptation) [in French]
Gwénolé Lecorvé | John Dines | Thomas Hain | Petr Motlicek
Proceedings of the Joint Conference JEP-TALN-RECITAL 2012, volume 1: JEP