Florian Ertz
2026
GePaDeSE: A New Resource for Clause-Level Aspect in German Parliamentary Debates
Julian Schlenker | Ines Rehbein | Lilly Brauner | Florian Ertz | Ines Reinig | Simone Paolo Ponzetto
Proceedings of the Fifteenth Language Resources and Evaluation Conference
This paper presents GePaDeSE, a new resource with annotations of clause-level aspect, also known as Situation Entity types, in German parliamentary debates. The resource includes 250 political speeches from the German Bundestag, given by 192 speakers, comprising over 220,000 tokens. In the paper, we first describe the new corpus and the annotation process. We then present experiments on automatically classifying clause-level aspect, together with an in-depth analysis that demonstrates the potential of Situation Entities for the analysis of political discourse.
2025
Moral reckoning: How reliable are dictionary-based methods for examining morality in text?
Ines Rehbein | Lilly Brauner | Florian Ertz | Ines Reinig | Simone Ponzetto
Proceedings of the 5th International Conference on Natural Language Processing for Digital Humanities
Due to their availability and ease of use, dictionary-based measures of moral values are a popular tool for text-based analyses of morality that examine human attitudes and behaviour across populations and cultures. In this paper, we revisit the construct validity of different dictionary-based measures of morality in text that have been proposed in the literature. We discuss conceptual challenges for text-based measures of morality and present an annotation experiment in which we create a new dataset with human annotations of moral rhetoric in German political manifestos. We compare the results of our human annotations with different measures of moral values, showing that none of them is able to capture the trends observed by trained human coders. Our findings have far-reaching implications for the application of moral dictionaries in the digital humanities.