Jesse Dunietz


To Test Machine Comprehension, Start by Defining Comprehension
Jesse Dunietz | Greg Burnham | Akash Bharadwaj | Owen Rambow | Jennifer Chu-Carroll | Dave Ferrucci
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Many tasks aim to measure machine reading comprehension (MRC), often focusing on question types presumed to be difficult. Rarely, however, do task designers start by considering what systems should in fact comprehend. In this paper we make two key contributions. First, we argue that existing approaches do not adequately define comprehension; they are too unsystematic about what content is tested. Second, we present a detailed definition of comprehension—a “Template of Understanding”—for a widely useful class of texts, namely short narratives. We then conduct an experiment that strongly suggests existing systems are not up to the task of narrative understanding as we define it.


DeepCx: A transition-based approach for shallow semantic parsing with complex constructional triggers
Jesse Dunietz | Jaime Carbonell | Lori Levin
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

This paper introduces the surface construction labeling (SCL) task, which expands the coverage of Shallow Semantic Parsing (SSP) to include frames triggered by complex constructions. We present DeepCx, a neural, transition-based system for SCL. As a test case for the approach, we apply DeepCx to the task of tagging causal language in English, which relies on a wider variety of constructions than are typically addressed in SSP. We report substantial improvements over previous tagging efforts on a causal language dataset. We also propose ways DeepCx could be extended to still more difficult constructions and to other semantic domains once appropriate datasets become available.


Automatically Tagging Constructions of Causation and Their Slot-Fillers
Jesse Dunietz | Lori Levin | Jaime Carbonell
Transactions of the Association for Computational Linguistics, Volume 5

This paper explores extending shallow semantic parsing beyond lexical-unit triggers, using causal relations as a test case. Semantic parsing becomes difficult in the face of the wide variety of linguistic realizations that causation can take on. We therefore base our approach on the concept of constructions from the linguistic paradigm known as Construction Grammar (CxG). In CxG, a construction is a form/function pairing that can rely on arbitrary linguistic and semantic features. Rather than codifying all aspects of each construction’s form, as some attempts to employ CxG in NLP have done, we propose methods that offload that problem to machine learning. We describe two supervised approaches for tagging causal constructions and their arguments. Both approaches combine automatically induced pattern-matching rules with statistical classifiers that learn the subtler parameters of the constructions. Our results show that these approaches are promising: they significantly outperform naïve baselines both for construction recognition and for matching cause and effect heads.

The BECauSE Corpus 2.0: Annotating Causality and Overlapping Relations
Jesse Dunietz | Lori Levin | Jaime Carbonell
Proceedings of the 11th Linguistic Annotation Workshop

Language of cause and effect captures an essential component of the semantics of a text. However, causal language is also intertwined with other semantic relations, such as temporal precedence and correlation. This makes it difficult to determine when causation is the primary intended meaning. This paper presents BECauSE 2.0, a new version of the BECauSE corpus with exhaustively annotated expressions of causal language, but also seven semantic relations that are frequently co-present with causation. The new corpus shows high inter-annotator agreement, and yields insights both about the linguistic expressions of causation and about the process of annotating co-present semantic relations.


Annotating Causal Language Using Corpus Lexicography of Constructions
Jesse Dunietz | Lori Levin | Jaime Carbonell
Proceedings of the 9th Linguistic Annotation Workshop


A New Entity Salience Task with Millions of Training Examples
Jesse Dunietz | Daniel Gillick
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)


The Effects of Lexical Resource Quality on Preference Violation Detection
Jesse Dunietz | Lori Levin | Jaime Carbonell
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)