Truls Pedersen


2026

We introduce TryggLLM, the first safety benchmark dataset for Norwegian. The dataset is intended for benchmarking different types of safety issues that can occur when using Norwegian generative language models. We have manually translated two English benchmark datasets, adapting the content to the Norwegian context. The benchmark dataset is composed of two sub-parts: i) prompts annotated by four native speakers in both written variants of Norwegian, Bokmål (BM) and Nynorsk (NN), such that each native speaker wrote in their preferred variant (two BM and two NN); ii) prompts and target responses, each of which has a BM and an NN version. We provide detailed descriptions of the data creation process. We also present a thorough manual evaluation of existing open Norwegian LLMs benchmarked with TryggLLM. Our results show that between 18% and 48% of the generated responses are unsafe across the tested models.

2018

Automatically identifying persons in a particular role within a large corpus can be a difficult task, especially when you do not know in advance who you are looking for. Resources compiling names of persons may be available, but no exhaustive lists exist. Moreover, such lists usually contain known names that are “visible” in the national public sphere, and tend to ignore marginal and international ones. In this article we propose a method for automatically suggesting names found in a corpus of Norwegian news articles that “naturally” belong with a given initial list of members but were not known (compiled in a list) beforehand. The approach is based, in part, on the assumption that surface-level syntactic features reveal parts of the underlying semantic content and can help uncover the structure of the language.
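The core idea above can be illustrated with a minimal distributional sketch (the helper names, the toy corpus, and the choice of a one-token context window are illustrative assumptions, not the paper's actual method): represent each name by the words appearing around it, and rank candidate names by how similar their contexts are to the combined context of the seed list.

```python
from collections import Counter
import math

def context_vector(name, corpus, window=1):
    """Count words within `window` tokens of `name` across
    tokenized sentences -- a crude proxy for surface-level
    syntactic context."""
    vec = Counter()
    for sent in corpus:
        for i, tok in enumerate(sent):
            if tok == name:
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        vec[sent[j]] += 1
    return vec

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def suggest(seed_names, candidates, corpus, top_k=3):
    """Rank candidate names by the similarity of their contexts
    to the pooled context of the seed names."""
    seed_vec = Counter()
    for name in seed_names:
        seed_vec.update(context_vector(name, corpus))
    scored = [(cosine(context_vector(c, corpus), seed_vec), c)
              for c in candidates]
    return [c for s, c in sorted(scored, reverse=True)[:top_k] if s > 0]
```

On a toy corpus where "Olsen" occurs in the same kind of sentence frame as the seed name "Hansen" while "Berg" does not, `suggest(["Hansen"], ["Olsen", "Berg"], corpus)` would surface "Olsen" first. A real system would use richer syntactic features than a fixed window, but the ranking mechanism is the same.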