Magdalena Markowska


2023

Finding Common Ground: Annotating and Predicting Common Ground in Spoken Conversations
Magdalena Markowska | Mohammad Taghizadeh | Adil Soubki | Seyed Mirroshandel | Owen Rambow
Findings of the Association for Computational Linguistics: EMNLP 2023

When we communicate with other humans, we do not simply generate a sequence of words. Rather, we use our cognitive state (beliefs, desires, intentions) and our model of the audience’s cognitive state to create utterances that affect the audience’s cognitive state in the intended manner. An important part of cognitive state is the common ground, which is the content the speaker believes, and the speaker believes the audience believes, and so on. While much attention has been paid to common ground in cognitive science, there has not been much work in natural language processing. In this paper, we introduce a new annotation and corpus to capture common ground. We then describe some initial experiments extracting propositions from dialog and tracking their status in the common ground from the perspective of each speaker.
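
As a rough illustration of what "tracking the status of a proposition in the common ground from the perspective of each speaker" could look like, here is a minimal Python sketch; the status labels and the CommonGroundTracker structure are hypothetical and are not the paper's annotation scheme.

```python
from dataclasses import dataclass, field

# Hypothetical status labels; the paper's actual annotation scheme may differ.
STATUSES = {"believes", "believes_not", "uncertain", "in_common_ground"}

@dataclass
class CommonGroundTracker:
    """Tracks, for each speaker, the status of each extracted proposition."""
    status: dict = field(default_factory=dict)  # (speaker, proposition) -> label

    def update(self, speaker: str, proposition: str, label: str) -> None:
        assert label in STATUSES
        self.status[(speaker, proposition)] = label

    def in_common_ground(self, proposition: str, speakers) -> bool:
        # Approximation: a proposition is in the common ground when every
        # speaker's perspective marks it as shared.
        return all(
            self.status.get((s, proposition)) == "in_common_ground" for s in speakers
        )

tracker = CommonGroundTracker()
tracker.update("A", "the meeting is at noon", "in_common_ground")
tracker.update("B", "the meeting is at noon", "believes")
print(tracker.in_common_ground("the meeting is at noon", ["A", "B"]))  # False
```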

2022

BeSt: The Belief and Sentiment Corpus
Jennifer Tracey | Owen Rambow | Claire Cardie | Adam Dalton | Hoa Trang Dang | Mona Diab | Bonnie Dorr | Louise Guthrie | Magdalena Markowska | Smaranda Muresan | Vinodkumar Prabhakaran | Samira Shaikh | Tomek Strzalkowski
Proceedings of the Thirteenth Language Resources and Evaluation Conference

We present the BeSt corpus, which records cognitive state: who believes what (i.e., factuality), and who has what sentiment towards what. This corpus is inspired by similar source-and-target corpora, specifically MPQA and FactBank. The corpus comprises two genres, newswire and discussion forums, in three languages, Chinese (Mandarin), English, and Spanish. The corpus is distributed through the LDC.

Re-Examining FactBank: Predicting the Author’s Presentation of Factuality
John Murzaku | Peter Zeng | Magdalena Markowska | Owen Rambow
Proceedings of the 29th International Conference on Computational Linguistics

We present a corrected version of a subset of the FactBank data set. Previously published results on FactBank are no longer valid. We perform experiments on FactBank using multiple training paradigms, data smoothing techniques, and polarity classifiers. We argue that f-measure is an important alternative evaluation metric for factuality. We provide new state-of-the-art results for four corpora including FactBank. We perform an error analysis on FactBank combined with two similar corpora.
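
To make the f-measure point concrete, here is a minimal sketch (not the paper's evaluation code) of per-label precision, recall, and F1 over FactBank-style factuality labels; the label set in the toy example is only illustrative.

```python
def f_measure(gold, pred, label):
    """Precision, recall, and F1 for one factuality label."""
    tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
    fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy gold/predicted labels (FactBank-style tags such as CT+, PR+, Uu, CT-).
gold = ["CT+", "CT+", "Uu", "CT-", "PR+"]
pred = ["CT+", "Uu",  "Uu", "CT-", "CT+"]
for lab in sorted(set(gold)):
    print(lab, f_measure(gold, pred, lab))
```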

SIGMORPHON–UniMorph 2022 Shared Task 0: Generalization and Typologically Diverse Morphological Inflection
Jordan Kodner | Salam Khalifa | Khuyagbaatar Batsuren | Hossep Dolatian | Ryan Cotterell | Faruk Akkus | Antonios Anastasopoulos | Taras Andrushko | Aryaman Arora | Nona Atanalov | Gábor Bella | Elena Budianskaya | Yustinus Ghanggo Ate | Omer Goldman | David Guriel | Simon Guriel | Silvia Guriel-Agiashvili | Witold Kieraś | Andrew Krizhanovsky | Natalia Krizhanovsky | Igor Marchenko | Magdalena Markowska | Polina Mashkovtseva | Maria Nepomniashchaya | Daria Rodionova | Karina Scheifer | Alexandra Sorova | Anastasia Yemelina | Jeremiah Young | Ekaterina Vylomova
Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology

The 2022 SIGMORPHON–UniMorph shared task on large-scale morphological inflection generation included a wide range of typologically diverse languages: 33 languages from 11 top-level language families: Arabic (Modern Standard), Assamese, Braj, Chukchi, Eastern Armenian, Evenki, Georgian, Gothic, Gujarati, Hebrew, Hungarian, Itelmen, Karelian, Kazakh, Ket, Khalkha Mongolian, Kholosi, Korean, Lamaholot, Low German, Ludic, Magahi, Middle Low German, Old English, Old High German, Old Norse, Polish, Pomak, Slovak, Turkish, Upper Sorbian, Veps, and Xibe. We emphasize generalization along different dimensions this year by evaluating test items with unseen lemmas and unseen features separately under small and large training conditions. Across the five submitted systems and two baselines, the prediction of inflections with unseen features proved challenging, with average performance decreasing substantially from last year. This was true even for languages for which the forms were in principle predictable, which suggests that further work is needed in designing systems that capture the various types of generalization required for the world’s languages.
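
The generalization evaluation described above can be pictured with a small sketch: split test items by whether their lemma and their feature bundle occurred in training, then score exact-match accuracy per bucket. This is an illustrative toy, not the official shared-task scorer, and the UniMorph-style tags in the example are made up for the demo.

```python
def accuracy_by_generalization(train, test):
    """Exact-match accuracy on test items, bucketed by whether the lemma and
    the feature bundle were seen in training."""
    seen_lemmas = {lemma for lemma, feats, form in train}
    seen_feats = {feats for lemma, feats, form in train}
    buckets = {}
    for lemma, feats, gold_form, pred_form in test:
        key = ("seen_lemma" if lemma in seen_lemmas else "unseen_lemma",
               "seen_feats" if feats in seen_feats else "unseen_feats")
        correct, total = buckets.get(key, (0, 0))
        buckets[key] = (correct + (pred_form == gold_form), total + 1)
    return {key: correct / total for key, (correct, total) in buckets.items()}

# Toy UniMorph-style triples: (lemma, feature bundle, inflected form).
train = [("walk", "V;PST", "walked"), ("walk", "V;V.PTCP;PRS", "walking")]
# Test items additionally carry a system prediction as the last element.
test = [("jump", "V;PST", "jumped", "jumped"),      # unseen lemma, seen features
        ("walk", "V;3;SG;PRS", "walks", "walkes")]  # seen lemma, unseen features
print(accuracy_by_generalization(train, test))
```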

2021

A Network Science Approach to Bilingual Code-switching
Qihui Xu | Magdalena Markowska | Martin Chodorow | Ping Li
Proceedings of the Society for Computation in Linguistics 2021

Finite-state Model of Shupamem Reduplication
Magdalena Markowska | Jeffrey Heinz | Owen Rambow
Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology

Shupamem, a language of Western Cameroon, is a tonal language which also exhibits the morpho-phonological process of full reduplication. This creates two challenges for a finite-state model of its morpho-syntax and morpho-phonology: how to manage the full reduplication and the autosegmental nature of lexical tone. Dolatian and Heinz (2020) explain how 2-way finite-state transducers can model full reduplication without an exponential increase in states, and finite-state transducers with multiple tapes have been used to model autosegmental tiers, including tone (Wiebe, 1992; Dolatian and Rawski, 2020a). Here we synthesize 2-way finite-state transducers and multitape transducers, resulting in a finite-state formalism that subsumes both, to account for the full reduplicative processes in Shupamem, which also affect tone.
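
The copying idea behind the 2-way transducer can be pictured with a toy simulation: the reading head makes one pass over the input, rewinds to the left edge, and makes a second pass, so total reduplication is produced without memorizing the string in the machine's states. The segments below are invented for illustration, and the sketch ignores the tonal tier that the paper's multitape machinery is designed to handle.

```python
def full_reduplication(segments):
    """Toy simulation of a 2-way transducer's copying behavior:
    pass over the input, rewind, pass over it again."""
    output = []
    pos = 0
    # First pass: left to right, writing each segment.
    while pos < len(segments):
        output.append(segments[pos])
        pos += 1
    # Rewind: move the head back to the left edge without writing.
    while pos > 0:
        pos -= 1
    # Second pass: left to right again, writing the copy.
    while pos < len(segments):
        output.append(segments[pos])
        pos += 1
    return output

# Made-up segmented form; real Shupamem data also carries lexical tone.
print(full_reduplication(["t", "á", "m"]))  # ['t', 'á', 'm', 't', 'á', 'm']
```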

2019

Weird Inflects but OK: Making Sense of Morphological Generation Errors
Kyle Gorman | Arya D. McCarthy | Ryan Cotterell | Ekaterina Vylomova | Miikka Silfverberg | Magdalena Markowska
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)

We conduct a manual error analysis of the CoNLL-SIGMORPHON Shared Task on Morphological Reinflection. This task involves natural language generation: systems are given a word in citation form (e.g., hug) and asked to produce the corresponding inflected form (e.g., the simple past hugged). We propose an error taxonomy and use it to annotate errors made by the top two systems across twelve languages. Many of the observed errors are related to inflectional patterns sensitive to inherent linguistic properties such as animacy or affect; many others are failures to predict truly unpredictable inflectional behaviors. We also find that nearly one quarter of the residual “errors” reflect errors in the gold data.
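
For concreteness, here is a toy sketch of the kind of bookkeeping such an error analysis involves; the category names below are hypothetical and far coarser than the taxonomy the paper actually proposes.

```python
# Toy reinflection outputs: (lemma, features, gold form, predicted form).
predictions = [
    ("hug", "V;PST", "hugged", "huged"),
    ("go",  "V;PST", "went",   "goed"),
    ("be",  "V;PST", "was",    "was"),
]

def annotate(lemma, feats, gold, pred):
    """Assign a hypothetical error category; the paper's taxonomy differs."""
    if pred == gold:
        return "correct"
    if pred == lemma + "ed":
        return "overregularization"  # treated an irregular or doubling verb as fully regular
    return "other"

for item in predictions:
    print(item, "->", annotate(*item))
```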