Ulla Petti
2024
LoSST-AD: A Longitudinal Corpus for Tracking Alzheimer’s Disease Related Changes in Spontaneous Speech
Ulla Petti | Anna Korhonen
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Language-based biomarkers have shown promising results in differentiating those with an Alzheimer’s disease (AD) diagnosis from healthy individuals, but the earliest changes in language are thought to start years or even decades before the diagnosis. Detecting these changes is critical to allow early interventions, but research into the earliest signs is challenging, as it requires large longitudinal datasets that are time-consuming and expensive to collect. There is a need for alternative methods for tracking longitudinal language change, including Natural Language Processing (NLP) and speech recognition technologies. We present a novel corpus that can enable this: a corpus of transcripts of public interviews with 20 famous figures, half of whom would eventually be diagnosed with AD, recorded over several decades. We evaluate the corpus by validating patterns of vocabulary richness changes known from the literature, such as decline in noun frequency, word length, and several other features. We show that public data can be used to collect longitudinal datasets without causing extra stress for participants, and that these data can adequately reflect longitudinal AD-related changes in vocabulary richness. Our corpus can provide a valuable starting point for the development of early detection tools and enhance our understanding of how AD affects language over time.
2020
Multi-SimLex: A Large-Scale Evaluation of Multilingual and Crosslingual Lexical Semantic Similarity
Ivan Vulić | Simon Baker | Edoardo Maria Ponti | Ulla Petti | Ira Leviant | Kelly Wing | Olga Majewska | Eden Bar | Matt Malone | Thierry Poibeau | Roi Reichart | Anna Korhonen
Computational Linguistics, Volume 46, Issue 4 - December 2020
We introduce Multi-SimLex, a large-scale lexical resource and evaluation benchmark covering data sets for 12 typologically diverse languages, including major languages (e.g., Mandarin Chinese, Spanish, Russian) as well as less-resourced ones (e.g., Welsh, Kiswahili). Each language data set is annotated for the lexical relation of semantic similarity and contains 1,888 semantically aligned concept pairs, providing representative coverage of word classes (nouns, verbs, adjectives, adverbs), frequency ranks, similarity intervals, lexical fields, and concreteness levels. Additionally, owing to the alignment of concepts across languages, we provide a suite of 66 crosslingual semantic similarity data sets. Because of its extensive size and language coverage, Multi-SimLex provides entirely novel opportunities for experimental evaluation and analysis. On its monolingual and crosslingual benchmarks, we evaluate and analyze a wide array of recent state-of-the-art monolingual and crosslingual representation models, including static and contextualized word embeddings (such as fastText, monolingual and multilingual BERT, and XLM), externally informed lexical representations, and fully unsupervised and (weakly) supervised crosslingual word embeddings. We also present a step-by-step protocol for creating consistent, Multi-SimLex-style resources for additional languages. We make these contributions, namely the public release of the Multi-SimLex data sets, their creation protocol, strong baseline results, and in-depth analyses that can help guide future developments in multilingual lexical semantics and representation learning, available via a website that will encourage community effort in the further expansion of Multi-SimLex to many more languages. Such a large-scale semantic resource could inspire significant further advances in NLP across languages.