Katharina Stein


2023

From Sentence to Action: Splitting AMR Graphs for Recipe Instructions
Katharina Stein | Lucia Donatelli | Alexander Koller
Proceedings of the Fourth International Workshop on Designing Meaning Representations

Accurately interpreting the relationships between actions in a recipe text is essential to successful recipe completion. We explore using Abstract Meaning Representation (AMR) to represent recipe instructions, abstracting away from syntax and sentence structure that may order recipe actions in arbitrary ways. We present an algorithm to split sentence-level AMRs into action-level AMRs for individual cooking steps. Our approach provides an automatic way to derive fine-grained AMR representations of actions in cooking recipes and can be a useful tool for downstream, instructional tasks.
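As a rough illustration of the kind of split this line of work targets (a minimal sketch, not the paper's algorithm): a sentence-level AMR whose root coordinates several actions can be parsed with the penman library and naively divided into one reachability-based sub-graph per :opN conjunct. The example graph, the reachable_subgraph helper, and the assumption of an "and" root are illustrative only; the paper's procedure additionally handles shared arguments and other structural complications.

    # Naive illustration (not the paper's algorithm): split a sentence-level AMR
    # rooted in "and" into one sub-graph per coordinated cooking action.
    import penman

    sentence_amr = """
    (a / and
       :op1 (c / chop-01
               :ARG1 (o / onion))
       :op2 (f / fry-01
               :ARG1 o
               :duration (t / temporal-quantity
                            :quant 5
                            :unit (m / minute))))
    """

    g = penman.decode(sentence_amr)

    def reachable_subgraph(graph, root):
        """Collect every triple reachable from `root` and wrap it as a new graph."""
        visited, stack, keep = set(), [root], []
        while stack:
            var = stack.pop()
            if var in visited:
                continue
            visited.add(var)
            for src, role, tgt in graph.triples:
                if src == var:
                    keep.append((src, role, tgt))
                    stack.append(tgt)
        return penman.Graph(keep, top=root)

    # Each :opN child of the "and" root becomes its own action-level AMR.
    action_roots = [tgt for src, role, tgt in g.triples
                    if src == g.top and role.startswith(':op')]
    for root in action_roots:
        print(penman.encode(reachable_subgraph(g, root)), end='\n\n')

Note that this naive reachability split simply duplicates shared material (here, the onion referenced by both actions) into each sub-graph; handling such reentrancies more carefully is part of what the paper's algorithm addresses.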

2021

Representing Implicit Positive Meaning of Negated Statements in AMR
Katharina Stein | Lucia Donatelli
Proceedings of the Joint 15th Linguistic Annotation Workshop (LAW) and 3rd Designing Meaning Representations (DMR) Workshop

Abstract Meaning Representation (AMR) has become popular for representing the meaning of natural language in graph structures. However, AMR does not represent scope information, posing a problem for its overall expressivity and specifically for drawing inferences from negated statements. This is the case with so-called “positive interpretations” of negated statements, in which implicit positive meaning is identified by inferring the opposite of the negation’s focus. In this work, we investigate how potential positive interpretations (PPIs) can be represented in AMR. We propose a logically motivated AMR structure for PPIs that makes the focus of negation explicit and sketch an initial proposal for a systematic methodology to generate this more expressive structure.
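For context on the gap the paper addresses: standard AMR marks sentential negation with a bare :polarity - attribute on the negated concept and records no scope or focus, so the graph below (an illustrative sketch, not the PPI structure proposed in the paper) cannot distinguish whether "The boy did not eat the cake" denies the eating, the agent, or the theme.

    # Standard AMR for "The boy did not eat the cake": ":polarity -" marks the
    # negation, but nothing identifies its focus; that is the expressivity gap.
    import penman

    negated = penman.decode("""
    (e / eat-01
       :polarity -
       :ARG0 (b / boy)
       :ARG1 (c / cake))
    """)
    print(penman.encode(negated))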

SHAPELURN: An Interactive Language Learning Game with Logical Inference
Katharina Stein | Leonie Harter | Luisa Geiger
Proceedings of the First Workshop on Interactive Learning for Natural Language Processing

We investigate if a model can learn natural language with minimal linguistic input through interaction. Addressing this question, we design and implement an interactive language learning game that learns logical semantic representations compositionally. Our game allows us to explore the benefits of logical inference for natural language learning. Evaluation shows that the model can accurately narrow down potential logical representations for words over the course of the game, suggesting that our model is able to learn lexical mappings from scratch successfully.