Julian Schlenker
2026
Appeal, Align, Divide? Stance Detection for Group-Directed Messages in German Parliamentary Debates
Ines Rehbein | Maris Leander Buttmann | Julian Schlenker | Simone Paolo Ponzetto
Proceedings of the Fifteenth Language Resources and Evaluation Conference
This paper presents a new benchmark for detecting group-based appeals, i.e., positive or negative references towards social groups, in German parliamentary debates. In the first step, group mentions are identified as targets for stance detection. In the next step, three human annotators assign stance labels to the group mentions, coding the speaker’s perspective towards the specific group. The created benchmark data is then used to investigate the capacity of Large Language Models (LLMs) for detecting politicians’ stances towards social groups. We explore the potential of different prompting strategies (zero-shot prompting, few-shot prompting, Chain-of-Thought) for this task and compare the results to a supervised BERT baseline, showing that in low-resource scenarios LLMs can outperform smaller fine-tuned models without the need for annotating large datasets.
GePaDeSE: A New Resource for Clause-Level Aspect in German Parliamentary Debates
Julian Schlenker | Ines Rehbein | Lilly Brauner | Florian Ertz | Ines Reinig | Simone Paolo Ponzetto
Proceedings of the Fifteenth Language Resources and Evaluation Conference
This paper presents GePaDeSE, a new resource with annotations of clause-level aspect, also known as Situation Entity types, in German parliamentary debates. The new resource includes 250 political speeches from the German Bundestag, given by 192 speakers, comprising over 220,000 tokens. In the paper, we first describe the new corpus and the annotation process. We then report experiments on automatically classifying clause-level aspect and provide an in-depth analysis that shows the potential of Situation Entities for the analysis of political discourse.
GePaDeU - a Multi-layer Corpus of German Parliamentary Debates with Rich Semantic and Pragmatic Annotations
Ines Rehbein | Julian Schlenker | Lars Ostertag | Simone Paolo Ponzetto
Proceedings of the Fifteenth Language Resources and Evaluation Conference
This paper presents GePaDeU, a new manually annotated corpus of German Parliamentary Debates with Unified layers of semantic and pragmatic information. The data includes parliamentary speeches from the German Bundestag from the period 2017–2021, with 267 speeches given by 197 members of parliament. The final release of our corpus unifies multiple annotation layers, including entity-level annotations, the annotation of speech events and their corresponding speakers, functional speech acts, clause-level aspect, and moral framing. We provide an overview of the various annotation layers and illustrate how the semantic and pragmatic annotations can be combined for corpus-linguistic studies and discourse analyses, and to answer research questions in the field of political science. The new resource will be made freely available to the research community.
2025
Only for the Unseen Languages, Say the Llamas: On the Efficacy of Language Adapters for Cross-lingual Transfer in English-centric LLMs
Julian Schlenker | Jenny Kunz | Tatiana Anikina | Günter Neumann | Simon Ostermann
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Most state-of-the-art large language models (LLMs) are trained mainly on English data, limiting their effectiveness on non-English, especially low-resource, languages. This study investigates whether language adapters can facilitate cross-lingual transfer in English-centric LLMs. We train language adapters for 13 languages using Llama 2 (7B) and Llama 3.1 (8B) as base models, and evaluate their effectiveness on two downstream tasks (MLQA and SIB-200) using either task adapters or in-context learning. Our results reveal that language adapters improve performance for languages not seen during pretraining, but provide negligible benefit for seen languages. These findings highlight the limitations of language adapters as a general solution for multilingual adaptation in English-centric LLMs.