2024
Analyzing the Role of Part-of-Speech in Code-Switching: A Corpus-Based Study
Jie Chi | Peter Bell
Findings of the Association for Computational Linguistics: EACL 2024
Code-switching (CS) is a common linguistic phenomenon wherein speakers fluidly transition between languages in conversation. While the cognitive processes driving CS remain complex, earlier investigations have shed light on its multifaceted triggers. This study examines the influence of part-of-speech (POS) on the propensity of bilinguals to engage in CS, through a comprehensive analysis of Spanish-English and Mandarin-English corpora. Compared with prior research, our findings not only affirm a statistically significant connection between POS and the likelihood of CS across language pairs, but also show that this relationship is strongest in the immediate vicinity of CS points and progressively weakens as tokens move further from them.
2023
Do dialogue representations align with perception? An empirical study
Sarenne Wallbridge | Peter Bell | Catherine Lai
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
There has been a surge of interest regarding the alignment of large-scale language models with human language comprehension behaviour. The majority of this research investigates comprehension behaviours from reading isolated, written sentences. We propose studying the perception of dialogue, focusing on an intrinsic form of language use: spoken conversations. Using the task of predicting upcoming dialogue turns, we ask whether turn plausibility scores produced by state-of-the-art language models correlate with human judgements. We find a strong correlation for some but not all models: masked language models produce stronger correlations than auto-regressive models. In doing so, we quantify human performance on the response selection task for open-domain spoken conversation; to the best of our knowledge, this is the first such quantification. We find that response selection performance can be used as a coarse proxy for the strength of correlation with human judgements; however, humans and models make different response selection mistakes. The model which produces the strongest correlation also outperforms human response selection performance. Through ablation studies, we show that pre-trained language models provide a useful basis for turn representations; however, fine-grained contextualisation, inclusion of dialogue structure information, and fine-tuning towards response selection all boost response selection accuracy by over 30 absolute points.
2022
Improving Code-switched ASR with Linguistic Information
Jie Chi | Peter Bell
Proceedings of the 29th International Conference on Computational Linguistics
This paper seeks to improve the performance of automatic speech recognition (ASR) systems operating on code-switched speech. Code-switching refers to the alternation of languages within a conversation, a phenomenon of increasing importance given the rapid rise in the number of bilingual speakers in the world. It is particularly challenging for ASR owing to the relative scarcity of code-switching speech and text data, even when the individual languages are themselves well-resourced. This paper proposes to overcome this challenge by applying linguistic theories to generate more realistic code-switching text, which is necessary for language modelling in ASR. Working with English-Spanish code-switching, we find that Equivalence Constraint theory and part-of-speech labelling are particularly helpful for text generation, and bring a 2% improvement to ASR performance.
2021
Segmenting Subtitles for Correcting ASR Segmentation Errors
David Wan | Chris Kedzie | Faisal Ladhak | Elsbeth Turcan | Petra Galuscakova | Elena Zotkina | Zhengping Jiang | Peter Bell | Kathleen McKeown
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Typical ASR systems segment the input audio into utterances using purely acoustic information, which may not resemble the sentence-like units that are expected by conventional machine translation (MT) systems for Spoken Language Translation. In this work, we propose a model for correcting the acoustic segmentation of ASR models for low-resource languages to improve performance on downstream tasks. We propose the use of subtitles as a proxy dataset for correcting ASR acoustic segmentation, creating synthetic acoustic utterances by modeling common error modes. We train a neural tagging model for correcting ASR acoustic segmentation and show that it improves downstream performance on MT and audio-document cross-language information retrieval (CLIR).
2020
Subtitles to Segmentation: Improving Low-Resource Speech-to-Text Translation Pipelines
David Wan | Zhengping Jiang | Chris Kedzie | Elsbeth Turcan | Peter Bell | Kathy McKeown
Proceedings of the workshop on Cross-Language Search and Summarization of Text and Speech (CLSSTS2020)
In this work, we focus on improving ASR output segmentation in the context of low-resource language speech-to-text translation. ASR output segmentation is crucial, as ASR systems segment the input audio using purely acoustic information and are not guaranteed to output sentence-like segments. Since most MT systems expect sentences as input, feeding in longer unsegmented passages can lead to sub-optimal performance. We explore the feasibility of using datasets of subtitles from TV shows and movies to train better ASR segmentation models. We further incorporate part-of-speech (POS) tag and dependency label information (derived from the unsegmented ASR outputs) into our segmentation model. We show that this noisy syntactic information can improve model accuracy. We evaluate our models intrinsically on segmentation quality and extrinsically on downstream MT performance, as well as downstream tasks including cross-lingual information retrieval (CLIR) tasks and human relevance assessments. Our model shows improved performance on downstream tasks for Lithuanian and Bulgarian.
2017
The SUMMA Platform Prototype
Renars Liepins | Ulrich Germann | Guntis Barzdins | Alexandra Birch | Steve Renals | Susanne Weber | Peggy van der Kreeft | Hervé Bourlard | João Prieto | Ondřej Klejch | Peter Bell | Alexandros Lazaridis | Alfonso Mendes | Sebastian Riedel | Mariana S. C. Almeida | Pedro Balage | Shay B. Cohen | Tomasz Dwojak | Philip N. Garner | Andreas Giefer | Marcin Junczys-Dowmunt | Hina Imran | David Nogueira | Ahmed Ali | Sebastião Miranda | Andrei Popescu-Belis | Lesly Miculicich Werlen | Nikos Papasarantopoulos | Abiola Obamuyide | Clive Jones | Fahim Dalvi | Andreas Vlachos | Yang Wang | Sibo Tong | Rico Sennrich | Nikolaos Pappas | Shashi Narayan | Marco Damonte | Nadir Durrani | Sameer Khurana | Ahmed Abdelali | Hassan Sajjad | Stephan Vogel | David Sheppey | Chris Hernon | Jeff Mitchell
Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics
We present the first prototype of the SUMMA Platform: an integrated platform for multilingual media monitoring. The platform contains a rich suite of low-level and high-level natural language processing technologies: automatic speech recognition of broadcast media, machine translation, automated tagging and classification of named entities, semantic parsing to detect relationships between entities, and automatic construction / augmentation of factual knowledge bases. Implemented on the Docker platform, it can easily be deployed, customised, and scaled to large volumes of incoming media streams.
2014
The UEDIN ASR systems for the IWSLT 2014 evaluation
Peter Bell | Pawel Swietojanski | Joris Driesen | Mark Sinclair | Fergus McInnes | Steve Renals
Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper describes the University of Edinburgh (UEDIN) ASR systems for the 2014 IWSLT Evaluation. Notable features of the English system include deep neural network acoustic models in both tandem and hybrid configuration with the use of multi-level adaptive networks, LHUC adaptation and Maxout units. The German system includes lightly supervised training and a new method for dictionary generation. Our voice activity detection system now uses a semi-Markov model to incorporate a prior on utterance lengths. These yield relative WER improvements of up to 30% on the tst2013 English test set.
2013
Description of the UEDIN system for German ASR
Joris Driesen | Peter Bell | Mark Sinclair | Steve Renals
Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign
In this paper we describe the ASR system for German built at the University of Edinburgh (UEDIN) for the 2013 IWSLT evaluation campaign. For ASR, the major challenge to overcome was finding suitable acoustic training data. Due to the lack of expertly transcribed acoustic speech data for German, acoustic model training had to be performed on publicly available data crawled from the internet. For evaluation, the lack of a manual segmentation into utterances was handled in two different ways: by generating an automatic segmentation, and by treating entire input files as a single segment. The latter method proved superior on the current task, yielding a WER of 28.16% on the dev set and 36.21% on the test set.
The UEDIN English ASR system for the IWSLT 2013 evaluation
Peter Bell | Fergus McInnes | Siva Reddy Gangireddy | Mark Sinclair | Alexandra Birch | Steve Renals
Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper describes the University of Edinburgh (UEDIN) English ASR system for the IWSLT 2013 Evaluation. Notable features of the system include deep neural network acoustic models in both tandem and hybrid configuration, cross-domain adaptation with multi-level adaptive networks, and the use of a recurrent neural network language model. Improvements to our system since the 2012 evaluation – which include the use of a significantly improved n-gram language model – result in a 19% relative WER reduction on the tst2012 set.
2012
Evaluating language understanding accuracy with respect to objective outcomes in a dialogue system
Myroslava O. Dzikovska | Peter Bell | Amy Isard | Johanna D. Moore
Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics
The UEDIN systems for the IWSLT 2012 evaluation
Eva Hasler | Peter Bell | Arnab Ghoshal | Barry Haddow | Philipp Koehn | Fergus McInnes | Steve Renals | Pawel Swietojanski
Proceedings of the 9th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper describes the University of Edinburgh (UEDIN) systems for the IWSLT 2012 Evaluation. We participated in the ASR (English), MT (English-French, German-English) and SLT (English-French) tracks.
2011
Beetle II: an adaptable tutorial dialogue system
Myroslava Dzikovska | Amy Isard | Peter Bell | Johanna Moore | Natalie Steinhauser | Gwendolyn Campbell
Proceedings of the SIGDIAL 2011 Conference