2024
Language Technologies as If People Mattered: Centering Communities in Language Technology Development
Nina Markl | Lauren Hall-Lew | Catherine Lai
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
In this position paper we argue that researchers interested in language and/or language technologies should attend to challenges of linguistic and algorithmic injustice together with language communities. We put forward that this can be done by drawing together diverse scholarly and experiential insights, building strong interdisciplinary teams, and paying close attention to the wider social, cultural and historical contexts of both language communities and the technologies we aim to develop.
2023
Do dialogue representations align with perception? An empirical study
Sarenne Wallbridge | Peter Bell | Catherine Lai
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
There has been a surge of interest in the alignment of large-scale language models with human language comprehension behaviour. The majority of this research investigates comprehension behaviours from reading isolated, written sentences. We propose studying the perception of dialogue, focusing on an intrinsic form of language use: spoken conversation. Using the task of predicting upcoming dialogue turns, we ask whether turn plausibility scores produced by state-of-the-art language models correlate with human judgements. We find a strong correlation for some but not all models: masked language models produce stronger correlations than auto-regressive models. In doing so, we quantify human performance on the response selection task for open-domain spoken conversation. To the best of our knowledge, this is the first such quantification. We find that response selection performance can be used as a coarse proxy for the strength of correlation with human judgements; however, humans and models make different response selection mistakes. The model which produces the strongest correlation also outperforms humans at response selection. Through ablation studies, we show that pre-trained language models provide a useful basis for turn representations; however, fine-grained contextualisation, inclusion of dialogue structure information, and fine-tuning towards response selection all boost response selection accuracy by over 30 absolute points.
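As an illustrative aside (not code from the paper): turn plausibility under an auto-regressive language model can be approximated by the log-probability the model assigns to a candidate turn given the dialogue context, and then correlated with human plausibility ratings. The sketch below assumes GPT-2 via Hugging Face transformers and uses made-up dialogue data and ratings; the paper additionally evaluates masked language models, which it finds correlate more strongly.

```python
# Minimal sketch: score candidate next turns with an auto-regressive LM and
# correlate the scores with human plausibility judgements.
# The model choice, example dialogue, and ratings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from scipy.stats import spearmanr

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def turn_log_prob(context: str, candidate: str) -> float:
    """Mean log-probability of the candidate turn given the dialogue context."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    cand_ids = tokenizer(" " + candidate, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, cand_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Log-probabilities for each position, predicted from the previous token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    start = ctx_ids.shape[1] - 1  # predictions covering the candidate tokens
    cand_scores = log_probs[start:, :].gather(
        1, input_ids[0, ctx_ids.shape[1]:].unsqueeze(1))
    return cand_scores.mean().item()

# Hypothetical dialogue context, candidate turns, and human plausibility ratings (1-5).
context = "A: Are you coming to the concert tonight? B:"
candidates = ["Yes, I wouldn't miss it.", "The train leaves at seven.", "Purple elephants sing."]
human_ratings = [4.8, 2.9, 1.2]

model_scores = [turn_log_prob(context, c) for c in candidates]
rho, p = spearmanr(model_scores, human_ratings)
print(f"Spearman correlation with human judgements: {rho:.2f} (p={p:.2f})")
```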
Synthesising Personality with Neural Speech Synthesis
Shilin Gao | Matthew P. Aylett | David A. Braude | Catherine Lai
Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Matching the personality of a conversational agent to the personality of the user can significantly improve the user experience, with many successful examples in text-based chatbots. It is also important for a voice-based system to be able to alter the personality of the speech as perceived by users. In this pilot study, fifteen voices were rated on the Big Five personality traits, and five content-neutral sentences were chosen for the listening tests. The audio data, together with two of the rated traits (Extroversion and Agreeableness), were used to train a neural speech synthesiser based on one male and one female voice. The effect of altering the personality trait features was evaluated in a second listening test. Both perceived extroversion and agreeableness in the synthetic voices were affected significantly. The controllable range was limited by a lack of variance in the source audio data. The perceived personality traits correlated with each other and with the naturalness of the speech. Future work could make a chatbot speak in a voice with a pre-defined or adaptive personality by combining personality synthesis in speech with text-based personality generation.
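As an illustrative aside (not the system described in the paper): personality control of this kind amounts to conditioning a neural synthesiser on continuous trait values alongside the speaker identity. The PyTorch sketch below is a toy stand-in for a real TTS architecture; the module names, dimensions, and trait scaling are assumptions.

```python
# Minimal sketch of conditioning a synthesiser on personality-trait features.
# The two trait inputs mirror Extroversion and Agreeableness scores; the
# encoder/decoder here are placeholders, not a real TTS system.
import torch
import torch.nn as nn

class TraitConditionedTTS(nn.Module):
    def __init__(self, n_speakers: int = 2, trait_dim: int = 2,
                 cond_dim: int = 64, n_mels: int = 80):
        super().__init__()
        self.speaker_emb = nn.Embedding(n_speakers, cond_dim)
        # Project [extroversion, agreeableness] scores into the conditioning space.
        self.trait_proj = nn.Linear(trait_dim, cond_dim)
        # Stand-in for a text encoder + acoustic decoder.
        self.decoder = nn.GRU(cond_dim, n_mels, batch_first=True)

    def forward(self, phone_emb, speaker_id, traits):
        # Broadcast speaker and trait conditioning over the phone sequence.
        cond = self.speaker_emb(speaker_id) + self.trait_proj(traits)
        mel, _ = self.decoder(phone_emb + cond.unsqueeze(1))
        return mel  # predicted mel-spectrogram frames

# Usage: synthesise with a more "extroverted, agreeable" setting for speaker 0.
model = TraitConditionedTTS()
phones = torch.randn(1, 50, 64)       # placeholder phone-level encodings
traits = torch.tensor([[0.9, 0.7]])   # normalised trait scores in [0, 1]
mel = model(phones, torch.tensor([0]), traits)
print(mel.shape)  # torch.Size([1, 50, 80])
```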
2021
Context-sensitive evaluation of automatic speech recognition: considering user experience & language variation
Nina Markl | Catherine Lai
Proceedings of the First Workshop on Bridging Human–Computer Interaction and Natural Language Processing
Commercial Automatic Speech Recognition (ASR) systems tend to show systemic predictive bias against marginalised speaker/user groups. Drawing on a case study, we highlight the need for an interdisciplinary and context-sensitive approach to documenting this bias, incorporating perspectives and methods from sociolinguistics, speech & language technology, and human-computer interaction. We argue that evaluation of ASR systems should be disaggregated by speaker group, include qualitative error analysis, and consider user experience in a broader sociolinguistic and social context.
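As an illustrative aside (not from the paper): disaggregated evaluation of the kind argued for here starts by computing word error rate separately for each speaker group rather than pooling the whole test set. The sketch below uses a self-contained WER computation and entirely made-up transcripts and group labels.

```python
# Minimal sketch: disaggregate ASR word error rate by speaker group.
# WER is computed from Levenshtein distance over word sequences; all data is hypothetical.
from collections import defaultdict

def word_errors(reference: str, hypothesis: str) -> tuple[int, int]:
    """Return (edit distance, reference length) over whitespace-tokenised words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)], len(ref)

# Hypothetical test set: (speaker group, reference transcript, ASR hypothesis).
results = [
    ("group_a", "the weather is lovely today", "the weather is lovely today"),
    ("group_a", "i am heading to the shops", "i am heading to the shops"),
    ("group_b", "could you turn the heating up please", "could you turn the heat in up please"),
    ("group_b", "we are meeting at half past seven", "we are meeting at half passed seven"),
]

errors, words = defaultdict(int), defaultdict(int)
for group, ref, hyp in results:
    e, n = word_errors(ref, hyp)
    errors[group] += e
    words[group] += n

for group in errors:
    print(f"{group}: WER = {errors[group] / words[group]:.2%}")
```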
2018
Polarity and Intensity: the Two Aspects of Sentiment Analysis
Leimin Tian | Catherine Lai | Johanna Moore
Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)
Current multimodal sentiment analysis frames sentiment score prediction as a general Machine Learning task. However, what the sentiment score actually represents has often been overlooked. As a measurement of opinions and affective states, a sentiment score generally consists of two aspects: polarity and intensity. We decompose sentiment scores into these two aspects and study how they are conveyed through individual modalities and combined multimodal models in a naturalistic monologue setting. In particular, we build unimodal and multimodal multi-task learning models with sentiment score prediction as the main task and polarity and/or intensity classification as the auxiliary tasks. Our experiments show that sentiment analysis benefits from multi-task learning, and individual modalities differ when conveying the polarity and intensity aspects of sentiment.
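As an illustrative aside (the architecture and loss weights are assumptions, not the authors' exact models): the multi-task setup described above can be sketched as a shared encoder with a regression head for the sentiment score and auxiliary classification heads for polarity and intensity, trained with a weighted sum of the task losses.

```python
# Minimal multi-task sketch: sentiment score regression as the main task,
# polarity and intensity classification as auxiliary tasks over a shared encoder.
# Dimensions, label sets, and loss weights are illustrative.
import torch
import torch.nn as nn

class MultiTaskSentimentModel(nn.Module):
    def __init__(self, feat_dim: int = 300, hidden: int = 128,
                 n_polarity: int = 3, n_intensity: int = 3):
        super().__init__()
        # Shared encoder over (uni- or multimodal) utterance features.
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.score_head = nn.Linear(hidden, 1)                 # main: sentiment score
        self.polarity_head = nn.Linear(hidden, n_polarity)     # auxiliary: polarity
        self.intensity_head = nn.Linear(hidden, n_intensity)   # auxiliary: intensity

    def forward(self, feats):
        h = self.encoder(feats)
        return self.score_head(h).squeeze(-1), self.polarity_head(h), self.intensity_head(h)

model = MultiTaskSentimentModel()
feats = torch.randn(8, 300)             # a batch of fused utterance features
score_y = torch.randn(8)                # gold sentiment scores
pol_y = torch.randint(0, 3, (8,))       # gold polarity labels
int_y = torch.randint(0, 3, (8,))       # gold intensity labels

score_pred, pol_pred, int_pred = model(feats)
# Weighted combination of the main and auxiliary losses.
loss = (nn.functional.mse_loss(score_pred, score_y)
        + 0.5 * nn.functional.cross_entropy(pol_pred, pol_y)
        + 0.5 * nn.functional.cross_entropy(int_pred, int_y))
loss.backward()
```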
2005
LPath+: A First-Order Complete Language for Linguistic Tree Query
Catherine Lai | Steven Bird
Proceedings of the 19th Pacific Asia Conference on Language, Information and Computation
2004
Querying and Updating Treebanks: A Critical Survey and Requirements Analysis
Catherine Lai | Steven Bird
Proceedings of the Australasian Language Technology Workshop 2004