Peter Schneider-Kamp
2026
DaLA: Danish Linguistic Acceptability Evaluation Guided by Real World Errors
Gianluca Barmina | Nathalie Carmen Hau Norman | Peter Schneider-Kamp | Lukas Galke Poech
Proceedings of the Fifteenth Language Resources and Evaluation Conference
We present an enhanced benchmark for evaluating linguistic acceptability in Danish. We first analyze the most common errors found in written Danish. Based on this analysis, we introduce a set of fourteen corruption functions that generate incorrect sentences by systematically introducing errors into existing correct Danish sentences. To ensure the accuracy of these corruptions, we assess their validity using both manual and automatic methods. The results are then used as a benchmark for evaluating Large Language Models on a linguistic acceptability judgement task. Our findings demonstrate that this extension is both broader and more comprehensive than the current state of the art. By incorporating a greater variety of corruption types, our benchmark provides a more rigorous assessment of linguistic acceptability and increases task difficulty, as evidenced by the lower performance of LLMs on our benchmark compared to existing ones. Our results also suggest that our benchmark has higher discriminatory power, allowing it to better distinguish well-performing models from low-performing ones.
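For illustration, the sketch below shows what one such corruption function could look like: a hypothetical Python function that renders a grammatical sentence unacceptable by swapping two adjacent words. The paper's fourteen corruption functions are derived from an analysis of real-world Danish errors and are not reproduced here.

```python
import random

def corrupt_swap_adjacent_words(sentence: str, rng: random.Random) -> str:
    """Corrupt a grammatical sentence by swapping two adjacent words.

    Illustrative sketch only; not one of the paper's fourteen
    error-analysis-derived corruption functions.
    """
    words = sentence.split()
    if len(words) < 2:
        return sentence  # too short to corrupt meaningfully
    i = rng.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

rng = random.Random(42)
print(corrupt_swap_adjacent_words("Hunden løber hurtigt gennem parken", rng))
```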
SommBench: Assessing Sommelier Expertise of Language Models
William Brach | Tomas Bedej | Jacob Nielsen | Jacob Pichna | Juraj Bedej | Eemeli Saarensilta | Julie Dupouy | Gianluca Barmina | Andrea Blasi Núñez | Peter Schneider-Kamp | Kristian Košťál | Michal Ries | Lukas Galke Poech
Proceedings of the Fifteenth Language Resources and Evaluation Conference
With the rapid advances of large language models, it becomes increasingly important to systematically evaluate their multilingual and multicultural capabilities. Previous cultural evaluation benchmarks focus mainly on basic cultural knowledge that can be encoded in linguistic form. Here, we propose SommBench, a multilingual benchmark to assess sommelier expertise, a domain deeply grounded in the senses of smell and taste. While language models learn about sensory properties exclusively through textual descriptions, SommBench tests whether this textual grounding is sufficient to emulate expert-level sensory judgment. SommBench comprises three main tasks: Wine Theory Question Answering (WTQA), Wine Feature Completion (WFC), and Food-Wine Pairing (FWP). SommBench is available in multiple languages: English, Slovak, Swedish, Finnish, German, Danish, Italian, and Spanish. This helps separate a language model’s wine expertise from its language skills. The benchmark datasets were developed in close collaboration with a professional sommelier and native speakers of the respective languages, resulting in 1,024 questions for wine theory question answering, 1,000 examples for wine feature completion, and 1,000 examples of food-wine pairing. We provide results for the most popular language models, including closed-weights models such as Gemini 2.5 and open-weights models such as GPT-OSS and Qwen 3. Our results show that the most capable models perform well on wine theory question answering (up to 97% correct with a closed-weights model), yet wine feature completion (peaking at 65%) and food-wine pairing (MCC ranging between 0 and 0.39) turn out to be more challenging. These results position SommBench as an interesting and challenging benchmark for evaluating the sommelier expertise of language models. The benchmark is publicly available at https://github.com/sommify/sommbench.
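For reference, the Matthews correlation coefficient (MCC) reported for food-wine pairing is, in its standard binary form, defined from the confusion-matrix counts as below; 0 corresponds to chance-level prediction and 1 to perfect prediction. Whether the paper uses the binary or a multiclass variant is not stated in the abstract.

```latex
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}}
```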
Dynaword: From One-shot to Continuously Developed Datasets
Kenneth Enevoldsen | Kristian Nørgaard Jensen | Jan Kostkan | Balázs Szabó | Márton Kardos | Kirsten Vad | Johan Heinsen | Andrea Blasi Núñez | Gianluca Barmina | Jacob Nielsen | Rasmus Larsen | Rob van der Goot | Peter Vahlstrup | Per Møldrup Dalum | Desmond Elliott | Lukas Galke Poech | Peter Schneider-Kamp | Kristoffer Nielbo
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Large-scale datasets are foundational for research and development in natural language processing. However, current approaches face three key challenges: (1) reliance on ambiguously licensed sources restricting use, sharing, and derivative works; (2) static dataset releases that prevent community contributions and diminish longevity; and (3) quality assurance processes restricted to publishing teams rather than leveraging community expertise. To address these limitations, we introduce two contributions: the Dynaword approach and Danish Dynaword. The Dynaword approach is a framework for creating large-scale, open datasets that can be continuously updated through community collaboration. Danish Dynaword is a concrete implementation that validates this approach and demonstrates its potential. Danish Dynaword contains over five times as many tokens as comparable releases, is exclusively openly licensed, and has received multiple contributions across industry, the public sector, and research institutions. The repository includes lightweight tests to ensure data formatting, quality, and documentation, establishing a sustainable framework for ongoing community contributions and dataset evolution.
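As a rough illustration of the kind of lightweight formatting test the abstract mentions, the sketch below validates a JSONL file against a required schema. The field names are assumptions for illustration, not Danish Dynaword's actual schema.

```python
import json

REQUIRED_FIELDS = {"id", "text", "source", "license"}  # hypothetical schema

def test_jsonl_formatting(path: str) -> None:
    """Cheap per-line checks: valid JSON, required fields, non-empty text."""
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            record = json.loads(line)  # fails loudly on malformed JSON
            missing = REQUIRED_FIELDS - record.keys()
            assert not missing, f"line {lineno}: missing fields {missing}"
            assert record["text"].strip(), f"line {lineno}: empty text"
```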
2025
Encoder vs Decoder: Comparative Analysis of Encoder and Decoder Language Models on Multilingual NLU Tasks
Dan Saattrup Nielsen | Kenneth Enevoldsen | Peter Schneider-Kamp
Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)
This paper explores the performance of encoder and decoder language models on multilingual Natural Language Understanding (NLU) tasks, with a broad focus on Germanic languages. Building upon the ScandEval benchmark, initially restricted to evaluating encoder models, we extend the evaluation framework to include decoder models. We introduce a method for evaluating decoder models on NLU tasks and apply it to the languages Danish, Swedish, Norwegian, Icelandic, Faroese, German, Dutch, and English. Through a series of experiments and analyses, we also address research questions regarding the comparative performance of encoder and decoder models, the impact of NLU task types, and the variation across language resources. Our findings reveal that encoder models can achieve significantly better NLU performance than decoder models despite having orders of magnitude fewer parameters. Additionally, we investigate the correlation between decoders and task performance via a UMAP analysis, shedding light on the unique capabilities of decoder and encoder models. This study contributes to a deeper understanding of language model paradigms in NLU tasks and provides valuable insights for model selection and evaluation in multilingual settings.
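One common way to put a decoder (generative) model on the same footing as an encoder for classification-style NLU tasks is to score each candidate label's verbalization by the log-probability the model assigns to it; the sketch below illustrates this generic approach. It is an assumption-level illustration, not necessarily the evaluation protocol the paper adds to ScandEval.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def label_logprob(model, tokenizer, prompt: str, label: str) -> float:
    """Sum of label-token log-probabilities, conditioned on the prompt.

    Assumes the prompt's tokenization is a prefix of the tokenization of
    prompt + label (typical for BPE tokenizers when the label starts
    with a space).
    """
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + label, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    return sum(
        log_probs[pos, full_ids[0, pos + 1]].item()
        for pos in range(prompt_len - 1, full_ids.shape[1] - 1)
    )

tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder model for the sketch
lm = AutoModelForCausalLM.from_pretrained("gpt2")
scores = {lab: label_logprob(lm, tok, "Review: great film. Sentiment:", lab)
          for lab in (" positive", " negative")}
print(max(scores, key=scores.get))
```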
MLDataForge: Accelerating Large-Scale Dataset Preprocessing and Access for Multimodal Foundation Model Training
Andrea Blasi Núñez | Lukas Paul Achatius Galke | Peter Schneider-Kamp
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
Preprocessing large and possibly multimodal datasets remains a key bottleneck in many machine learning workflows, particularly when random access to samples is needed for global shuffling and sorting. Existing approaches, including widely used formats like JSONL and frameworks such as Huggingface Datasets and MosaicML Streaming, typically incur substantial computational, memory, and storage overhead in such settings. Here, we introduce MLDataForge, a Python-based open-source framework designed for scalable dataset preprocessing and access. Our key contributions are: (1) optimized readers for Mosaic Data Shards (MDS) that substantially improve throughput, reduce peak storage usage, and support sample-level compression; (2) JINX (JSON Indexed ’N’ eXtended), a novel, index-augmented JSONL-compatible format supporting structured footers and binary sidecar files; and (3) a lazy-loading mechanism that defers the loading, decompression, and decoding of JINX files until sample fields are accessed. We empirically evaluate MLDataForge and our contributions on a representative 200 GB supervised fine-tuning dataset for vision language models. Our best configuration – zstd-compressed JINX with binary sidecar and lazy loading – yields at least a decimal order-of-magnitude throughput increase compared to the best baselines for iteration, global shuffling, and sorting. These advances enable substantial gains in data preprocessing performance, facilitating more scalable and resource-efficient model training pipelines.
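The core idea behind an index-augmented JSONL format can be illustrated in a few lines: record the byte offset of every sample so that any sample can be read in any order without scanning the file, and only decode a record when it is accessed. The sketch below is illustrative and is not MLDataForge's actual JINX implementation, which additionally provides structured footers, binary sidecars, and sample-level compression.

```python
import json

def build_offset_index(path: str) -> list[int]:
    """One sequential pass recording the byte offset of each JSONL line."""
    offsets, pos = [], 0
    with open(path, "rb") as f:
        for line in f:
            offsets.append(pos)
            pos += len(line)
    return offsets

class IndexedJsonl:
    """Random access to JSONL samples via a byte-offset index."""

    def __init__(self, path: str, offsets: list[int]):
        self.file = open(path, "rb")
        self.offsets = offsets

    def __len__(self) -> int:
        return len(self.offsets)

    def __getitem__(self, i: int) -> dict:
        # Lazy: a record is only read and decoded when it is accessed,
        # so global shuffling reduces to permuting the index.
        self.file.seek(self.offsets[i])
        return json.loads(self.file.readline())
```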
Continual Quantization-Aware Pre-Training: When to transition from 16-bit to 1.58-bit pre-training for BitNet language models?
Jacob Nielsen | Peter Schneider-Kamp | Lukas Galke
Findings of the Association for Computational Linguistics: ACL 2025
Large language models (LLMs) require immense resources for training and inference. Quantization, a technique that reduces the precision of model parameters, offers a promising solution for improving LLM efficiency and sustainability. While post-training quantization methods typically achieve 4-8 bits per parameter, recent research suggests that training LLMs with 1.58 bits per weight parameter from scratch can maintain model accuracy while greatly reducing memory requirements and energy consumption at inference time. Here, we investigate a training strategy for quantization-aware pre-training, where the models are first trained with 16-bit precision and then transition into 1.58-bit quantization-aware training. Our results on 11 downstream tasks show that this 16-to-1.58-bit training strategy is preferable over full 1.58-bit training and leaves models closer to those which have undergone 16-bit training. We further investigate the effects of retaining the optimizer state at the transition point and of gradually phasing in quantization strength, finding that both techniques alleviate the magnitude of loss spikes, but also that these effects can be compensated for through further training.
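For context, BitNet-style 1.58-bit training constrains each weight to the ternary set {-1, 0, +1} (log2(3) ≈ 1.58 bits) during the forward pass while keeping high-precision latent weights for the optimizer. The sketch below shows the widely published "absmean" quantizer with a straight-through estimator, plus one plausible reading of "gradually phasing in quantization strength" as linear interpolation between full-precision and quantized weights; the interpolation schedule is an assumption, not necessarily the paper's exact scheme.

```python
import torch

def absmean_quantize(w: torch.Tensor) -> torch.Tensor:
    """BitNet b1.58-style ternary quantization with a straight-through estimator."""
    scale = w.abs().mean().clamp(min=1e-5)
    w_q = (w / scale).round().clamp(-1, 1) * scale  # values in {-scale, 0, +scale}
    return w + (w_q - w).detach()  # forward: quantized; backward: identity gradient

def effective_weight(w: torch.Tensor, lam: float) -> torch.Tensor:
    """lam = 0.0 recovers 16-bit weights; lam = 1.0 is fully 1.58-bit."""
    return (1.0 - lam) * w + lam * absmean_quantize(w)
```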
2022
Multi-sense Language Modelling
Andrea Lekkas | Peter Schneider-Kamp | Isabelle Augenstein
Proceedings of the Workshop on Dimensions of Meaning: Distributional and Curated Semantics (DistCurate 2022)
The effectiveness of a language model is influenced by its token representations, which must encode contextual information and handle the same word form having a plurality of meanings (polysemy). Currently, none of the common language modelling architectures explicitly model polysemy. We propose a language model which not only predicts the next word, but also its sense in context. We argue that this higher prediction granularity may be useful for end tasks such as assistive writing, and allow for a more precise linking of language models with knowledge bases. We find that multi-sense language modelling requires architectures that go beyond standard language models, and here propose a localized prediction framework that decomposes the task into a word prediction task followed by a sense prediction task. To aid sense prediction, we utilise a Graph Attention Network, which encodes definitions and example uses of word senses. Overall, we find that multi-sense language modelling is a highly challenging task, and suggest that future work focus on the creation of more annotated training datasets.
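A minimal sketch of the word-then-sense decomposition might look as follows; the module choices and the word-to-senses lookup are hypothetical, and the paper's Graph Attention Network over sense definitions is omitted.

```python
import torch
import torch.nn as nn

class WordThenSenseLM(nn.Module):
    """Two-stage prediction: next word first, then a sense of that word."""

    def __init__(self, vocab_size: int, num_senses: int, hidden: int = 512):
        super().__init__()
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.word_head = nn.Linear(hidden, vocab_size)
        self.sense_head = nn.Linear(hidden, num_senses)

    def forward(self, x: torch.Tensor, senses_of_word: dict[int, list[int]]):
        # x: (batch, seq, hidden) pre-embedded input tokens
        h, _ = self.encoder(x)
        word_logits = self.word_head(h[:, -1])
        sense_logits = self.sense_head(h[:, -1])
        # "Localized" prediction: only senses of the predicted word stay eligible.
        mask = torch.full_like(sense_logits, float("-inf"))
        for b, w in enumerate(word_logits.argmax(dim=-1).tolist()):
            mask[b, senses_of_word.get(w, [])] = 0.0
        return word_logits, sense_logits + mask
```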
Co-authors
- Lukas Galke Poech 5
- Gianluca Barmina 3
- Jacob Nielsen 3
- Andrea Blasi Núñez 3
- Kenneth Enevoldsen 2
- Isabelle Augenstein 1
- Tomas Bedej 1
- Juraj Bedej 1
- William Brach 1
- Per Møldrup Dalum 1
- Julie Dupouy 1
- Desmond Elliott 1
- Rob van der Goot 1
- Johan Heinsen 1
- Kristian Nørgaard Jensen 1
- Márton Kardos 1
- Jan Kostkan 1
- Kristian Košťál 1
- Rasmus Larsen 1
- Andrea Lekkas 1
- Kristoffer Nielbo 1
- Dan Saattrup Nielsen 1
- Nathalie Carmen Hau Norman 1
- Jacob Pichna 1
- Michal Ries 1
- Eemeli Saarensilta 1
- Balázs Szabó 1
- Kirsten Vad 1
- Peter Vahlstrup 1