Sven Naber
2026
Probing Discrete Speech Tokens of Spoken Language Models
Sven Naber | Julia Koch | Pranav Singh | Alberto Saponaro | Ioanna Karagianni | Ngoc Thang Vu
Proceedings of the Fifteenth Language Resources and Evaluation Conference
This paper presents a framework for systematic probing of discrete speech token representations in spoken language models (SLMs). We propose three complementary components: a distributional divergence analysis testing whether an attribute is reflected in token usage, token-based classifiers to quantify recoverability, and an attribute-conditioned representation analysis revealing phonetic attribute realizations. As a demonstration, we apply these probes to tokenizer outputs and model generations from CosyVoice2 and SparkTTS on LibriTTS-R and VCTK. We find that gender is encoded in their respective tokens but in different forms: the signal is more stable across stages and datasets in CosyVoice2, whereas SparkTTS shows weaker cross-stage consistency and stronger pause/prosody-related effects. Exploratory probes of valence, arousal, and dominance are weaker and less consistent. These results show that discrete speech tokens retain speaker-related information in different ways across architectures, and that the proposed framework provides an interpretable basis for comparing token representations across spoken language modeling pipelines.
2025
Evaluating Textual and Visual Semantic Neighborhoods of Abstract and Concrete Concepts
Sven Naber | Diego Frassinelli | Sabine Schulte Im Walde
Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025)
This paper presents a systematic evaluation of nearest neighbors across semantic representation spaces in both textual and visual modalities. We focus on nominal concepts with varying concreteness levels, and apply a neighborhood overlap measure to compare target concepts that differ in their linguistic and perceptual nature. We find that alignment is primarily determined by modality, and additionally by level of concreteness: models from the same modality show stronger alignment than cross-modal models, and spaces of concrete concepts show stronger alignment than those of abstract ones. Overall, larger neighborhood sizes strengthen the alignment between spaces.