Freddy Wetjen


2025

The Impact of Copyrighted Material on Large Language Models: A Norwegian Perspective
Javier de la Rosa | Vladislav Mikhailov | Lemei Zhang | Freddy Wetjen | David Samuel | Peng Liu | Rolv-Arild Braaten | Petter Mæhlum | Magnus Breder Birkenes | Andrey Kutuzov | Tita Enstad | Hans Christian Farsethås | Svein Arne Brygfjeld | Jon Atle Gulla | Stephan Oepen | Erik Velldal | Wilfred Østgulen | Lilja Øvrelid | Aslak Sira Myhre
Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)

The use of copyrighted materials in training language models raises critical legal and ethical questions. This paper presents a framework for, and the results of, empirically assessing the impact of publisher-controlled copyrighted corpora on the performance of generative large language models (LLMs) for Norwegian. Evaluating the models on a diverse set of tasks, we found that adding both books and newspapers to the data mixture tends to improve performance, while the addition of fiction works seems to be detrimental. Our experiments could inform the creation of a compensation scheme for authors whose works contribute to AI development.

2023

Boosting Norwegian Automatic Speech Recognition
Javier de la Rosa | Rolv-Arild Braaten | Per Kummervold | Freddy Wetjen
Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)

In this paper, we present several baseline automatic speech recognition (ASR) models for the two official written languages in Norway: Bokmål and Nynorsk. We compare the performance of models of varying sizes and pre-training approaches on multiple Norwegian speech datasets. Additionally, we measure the performance of these models against previous state-of-the-art ASR models, as well as on out-of-domain datasets. We improve the state of the art on the Norwegian Parliamentary Speech Corpus (NPSC) from a word error rate (WER) of 17.10% to 7.60%, with models achieving 5.81% for Bokmål and 11.54% for Nynorsk. We also discuss challenges and potential solutions for further improving ASR models for Norwegian.
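
Word error rate (WER), the metric quoted above, is the word-level Levenshtein edit distance between a reference transcript and a hypothesis, divided by the number of reference words. A minimal self-contained sketch of the computation (not the paper's evaluation code; the example sentences are invented):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = ref[i - 1] != hyp[j - 1]
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four reference words -> WER 0.25.
print(wer("stortinget møtes i morgen", "stortinget møter i morgen"))
```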

A Large Norwegian Dataset for Weak Supervision ASR
Per Erik Solberg | Pierre Beauguitte | Per Egil Kummervold | Freddy Wetjen
Proceedings of the Second Workshop on Resources and Representations for Under-Resourced Languages and Domains (RESOURCEFUL-2023)

With the advent of weakly supervised ASR systems like Whisper, it is possible to train ASR systems on non-verbatim transcriptions. This paper describes an effort to create a large Norwegian dataset for weakly supervised ASR from parliamentary recordings. Audio from Stortinget, the Norwegian parliament, is segmented and transcribed with an existing ASR system. An algorithm then retrieves the transcript of each segment from Stortinget’s official proceedings using the Levenshtein edit distance between the ASR output and the proceedings text. In this way, a dataset of more than 5,000 hours of transcribed speech is produced with limited human effort. Since parliamentary data is in the public domain, the dataset can be shared freely without any restrictions.
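
The retrieval step lends itself to a compact sketch. The code below is a hypothetical simplification of that idea, not the authors' released implementation: it slides word windows of roughly the segment's length over the proceedings text and keeps the window with the lowest length-normalised Levenshtein distance to the ASR output.

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level Levenshtein distance via two-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution or match
        prev = curr
    return prev[-1]

def retrieve_transcript(asr_text: str, proceedings: str,
                        max_norm_dist: float = 0.4) -> str | None:
    """Return the proceedings window best matching the ASR output, or None."""
    words = proceedings.split()
    n = len(asr_text.split())
    best, best_dist = None, float("inf")
    for size in {max(1, n - 2), n, n + 2}:  # tolerate non-verbatim drift
        for start in range(max(1, len(words) - size + 1)):
            window = " ".join(words[start:start + size])
            dist = levenshtein(asr_text, window) / max(len(asr_text), 1)
            if dist < best_dist:
                best, best_dist = window, dist
    return best if best_dist <= max_norm_dist else None

asr = "det er en glede å legge frem denne saken"
doc = ("Presidenten: Neste sak er sak nr. 2. Statsråden: Det er en glede "
       "å legge fram denne saken for Stortinget i dag.")
print(retrieve_transcript(asr, doc))
# -> "Det er en glede å legge fram denne saken"
```

The threshold on the normalised distance plays the role of a quality filter: segments whose best match is still far from the ASR output are discarded rather than paired with an unreliable transcript.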

2022

The Norwegian Colossal Corpus: A Text Corpus for Training Large Norwegian Language Models
Per Kummervold | Freddy Wetjen | Javier de la Rosa
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Norwegian has been one of many languages lacking sufficient available text to train high-quality language models. In an attempt to bridge this gap, we introduce the Norwegian Colossal Corpus (NCC), which comprises 49GB of clean Norwegian textual data containing over 7B words. The NCC is composed of a wide variety of sources, ranging from books and newspapers to government documents and public reports, showcasing the many uses of the Norwegian language in society. The corpus contains mainly Norwegian Bokmål and Norwegian Nynorsk. Each document in the corpus is tagged with metadata that enables the creation of sub-corpora for specific needs. Its structure makes it easy to combine with large web archives that, for licensing reasons, could not be distributed together with the NCC. By releasing this corpus openly to the public, we hope to foster the creation of both better Norwegian language models and multilingual language models with support for Norwegian.
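
Because every document carries metadata, building a sub-corpus reduces to a filter over those fields. A minimal sketch using the Hugging Face datasets library; the dataset identifier and the field names (doc_type, lang_fasttext) are assumptions for illustration and may not match the released schema exactly:

```python
from datasets import load_dataset

# Stream the corpus so the 49GB of text never has to fit in memory.
# Dataset id and field names below are assumptions for illustration.
ncc = load_dataset("NbAiLab/NCC", split="train", streaming=True)

# Hypothetical sub-corpus: Nynorsk newspaper text only.
newspapers_nn = ncc.filter(
    lambda doc: doc["doc_type"].startswith("newspaper")
    and doc["lang_fasttext"] == "nn"
)

for doc in newspapers_nn.take(3):
    print(doc["id"], doc["text"][:80])
```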

2021

Operationalizing a National Digital Library: The Case for a Norwegian Transformer Model
Per E Kummervold | Javier de la Rosa | Freddy Wetjen | Svein Arne Brygfjeld
Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)

In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokmål and Norwegian Nynorsk. Our model also improves on mBERT’s performance for other languages present in the corpus, such as English, Swedish, and Danish. For languages not included in the corpus, performance degrades moderately while the model retains strong multilingual properties. We therefore show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.
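
A quick way to probe such a model is to query its masked-language-modelling head directly. A minimal sketch with the transformers pipeline API; the checkpoint identifier NbAiLab/nb-bert-base is an assumption for illustration, and any Norwegian BERT checkpoint with a masked-LM head would work the same way:

```python
from transformers import pipeline

# Checkpoint id is an assumption for illustration.
fill = pipeline("fill-mask", model="NbAiLab/nb-bert-base")

# BERT-style models use the literal [MASK] token.
for pred in fill("Nasjonalbiblioteket ligger i [MASK]."):
    print(f'{pred["token_str"]:>12}  {pred["score"]:.3f}')
```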