Mark Steedman

Also published as: M. Steedman


2024

pdf
Human Temporal Inferences Go Beyond Aspectual Class
Katarzyna Pruś | Mark Steedman | Adam Lopez
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

Past work in NLP has proposed the task of classifying English verb phrases into situation aspect categories, assuming that these categories play an important role in tasks requiring temporal reasoning. We investigate this assumption by gathering crowd-sourced judgements about aspectual entailments from non-expert, native English participants. The results suggest that aspectual class alone is not sufficient to explain the response patterns of the participants. We propose that looking at scenarios which can feasibly accompany an action description contributes towards a better explanation of the participants’ answers. A further experiment using GPT-3.5 shows that its outputs follow different patterns than human answers, suggesting that such conceivable scenarios cannot be fully accounted for in the language alone. We release our dataset to support further research.

pdf bib
Evaluating Chinese Large Language Models on Discipline Knowledge Acquisition via Memorization and Robustness Assessment
Chuang Liu | Renren Jin | Mark Steedman | Deyi Xiong
Proceedings of the 1st Workshop on Data Contamination (CONDA)

Chinese LLMs demonstrate impressive performance on NLP tasks, particularly on discipline knowledge benchmarks, with some results approaching those of GPT-4. Previous research has viewed these advancements as potential outcomes of data contamination or leakage, prompting efforts to create new detection methods and address evaluation issues in LLM benchmarks. However, there has been a lack of comprehensive assessment of the evolution of Chinese LLMs. To address this gap, this paper offers a thorough investigation of Chinese LLMs on discipline knowledge evaluation, delving into the advancements of various LLMs, including a group of related models and others. Specifically, we conduct six assessments ranging from knowledge memorization to comprehension for robustness, encompassing tasks such as predicting incomplete questions and options, identifying behaviors induced by contaminated fine-tuning, and answering rephrased questions. Experimental findings indicate a positive correlation between the release time of LLMs and their memorization capabilities, but also show that the models struggle with variations of the original question-option pairs. Additionally, our findings suggest that question descriptions have a more significant impact on LLMs’ performance.

2023

pdf
Extrinsic Evaluation of Machine Translation Metrics
Nikita Moghe | Tom Sherborne | Mark Steedman | Alexandra Birch
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Automatic machine translation (MT) metrics are widely used to distinguish the quality of machine translation systems across relatively large test sets (system-level evaluation). However, it is unclear if automatic metrics are reliable at distinguishing good translations from bad translations at the sentence level (segment-level evaluation). In this paper, we investigate how useful MT metrics are at detecting segment-level quality by correlating metrics with how useful the translations are for a downstream task. We evaluate the segment-level performance of the most widely used MT metrics (chrF, COMET, BERTScore, etc.) on three downstream cross-lingual tasks (dialogue state tracking, question answering, and semantic parsing). For each task, we only have access to a monolingual task-specific model and a translation model. We calculate the correlation between the metric’s ability to predict a good/bad translation and success/failure on the final task for the machine-translated test sentences. Our experiments demonstrate that all metrics exhibit negligible correlation with the extrinsic evaluation of the downstream outcomes. We also find that the scores provided by neural metrics are not interpretable, in large part due to having undefined ranges. We synthesise our analysis into recommendations for future MT metrics to produce labels rather than scores for more informative interaction between machine translation and multilingual language understanding.

pdf
Multi-Document Summarization with Centroid-Based Pretraining
Ratish Surendran Puduppully | Parag Jain | Nancy Chen | Mark Steedman
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

In Multi-Document Summarization (MDS), the input can be modeled as a set of documents, and the output is its summary. In this paper, we focus on pretraining objectives for MDS. Specifically, we introduce a novel pretraining objective, which involves selecting the ROUGE-based centroid of each document cluster as a proxy for its summary. Our objective thus does not require human-written summaries and can be utilized for pretraining on a dataset consisting solely of document sets. Through zero-shot, few-shot, and fully supervised experiments on multiple MDS datasets, we show that our model Centrum is better than or comparable to a state-of-the-art model. We make the pretrained and fine-tuned models freely available to the research community at https://github.com/ratishsp/centrum.
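
The centroid objective described above is easy to sketch. The following minimal Python illustration (not the authors’ implementation) scores each document in a cluster by its average ROUGE-1 F1 against the other documents and selects the highest-scoring one as the pseudo-summary; the simple rouge1_f1 helper is an assumed stand-in for a full ROUGE package.

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap F1 -- a stand-in for a full ROUGE implementation."""
    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / sum(cand.values()), overlap / sum(ref.values())
    return 2 * p * r / (p + r)

def select_centroid(cluster):
    """Pick the document closest (by average ROUGE-1 F1) to all others."""
    def avg_score(doc):
        others = [d for d in cluster if d is not doc]
        return sum(rouge1_f1(doc, o) for o in others) / len(others)
    return max(cluster, key=avg_score)

docs = ["a storm hit the coast on monday",
        "the storm reached the coast monday morning",
        "officials urged residents to evacuate"]
print(select_centroid(docs))  # one of the two storm sentences
```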

pdf
Align-then-Enhance: Multilingual Entailment Graph Enhancement with Soft Predicate Alignment
Yuting Wu | Yutong Hu | Yansong Feng | Tianyi Li | Mark Steedman | Dongyan Zhao
Findings of the Association for Computational Linguistics: ACL 2023

Entailment graphs (EGs) with predicates as nodes and entailment relations as edges are typically incomplete, while EGs in different languages are often complementary to each other. In this paper, we propose a new task, multilingual entailment graph enhancement, which aims to utilize the entailment information from one EG to enhance another EG in a different language. The ultimate goal is to obtain an enhanced EG containing richer and more accurate entailment information. We present an align-then-enhance framework (ATE) to achieve accurate multilingual entailment graph enhancement, which first exploits a cross-graph guided interaction mechanism to automatically discover potential equivalent predicates between different EGs and then constructs more accurate enhanced entailment graphs based on soft predicate alignments. Extensive experiments show that ATE achieves better and more robust predicate alignment results between different EGs, and the enhanced entailment graphs generated by ATE outperform the original graphs for entailment detection.

pdf
Sources of Hallucination by Large Language Models on Inference Tasks
Nick McKenna | Tianyi Li | Liang Cheng | Mohammad Hosseini | Mark Johnson | Mark Steedman
Findings of the Association for Computational Linguistics: EMNLP 2023

Large Language Models (LLMs) are claimed to be capable of Natural Language Inference (NLI), necessary for applied tasks like question answering and summarization. We present a series of behavioral studies on several LLM families (LLaMA, GPT-3.5, and PaLM) which probe their behavior using controlled experiments. We establish two biases originating from pretraining which predict much of their behavior, and show that these are major sources of hallucination in generative LLMs. First, memorization at the level of sentences: we show that, regardless of the premise, models falsely label NLI test samples as entailing when the hypothesis is attested in training data, and that entities are used as “indices” to access the memorized data. Second, statistical patterns of usage learned at the level of corpora: we further show a similar effect when the premise predicate is less frequent than that of the hypothesis in the training data, a bias following from previous studies. We demonstrate that LLMs perform significantly worse on NLI test samples which do not conform to these biases than those which do, and we offer these as valuable controls for future LLM evaluation.

pdf
Smoothing Entailment Graphs with Language Models
Nick McKenna | Tianyi Li | Mark Johnson | Mark Steedman
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf
Complementary Roles of Inference and Language Models in QA
Liang Cheng | Mohammad Javad Hosseini | Mark Steedman
Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning

Answering open-domain questions through unsupervised methods poses challenges for both machine-reading (MR)- and language model (LM)-based approaches. The MR-based approach suffers from sparsity issues in extracted knowledge graphs (KGs), while the performance of the LM-based approach significantly depends on the quality of the retrieved context for questions. In this paper, we compare these approaches and propose a novel methodology that leverages directional predicate entailment (inference) to address these limitations. We use entailment graphs (EGs), with natural language predicates as nodes and entailment as edges, to enhance parsed KGs by inferring unseen assertions, effectively mitigating the sparsity problem in the MR-based approach. We also show EGs improve context retrieval for the LM-based approach. Additionally, we present a Boolean QA task, demonstrating that EGs exhibit comparable directional inference capabilities to large language models (LLMs). Our results highlight the importance of inference in open-domain QA and the improvements brought by leveraging EGs.

2022

pdf
Zero-shot Cross-Linguistic Learning of Event Semantics
Malihe Alikhani | Thomas Kober | Bashar Alhafni | Yue Chen | Mert Inan | Elizabeth Nielsen | Shahab Raji | Mark Steedman | Matthew Stone
Proceedings of the 15th International Conference on Natural Language Generation

pdf
Sentence-Incremental Neural Coreference Resolution
Matt Grenander | Shay B. Cohen | Mark Steedman
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

We propose a sentence-incremental neural coreference resolution system which incrementally builds clusters after marking mention boundaries in a shift-reduce fashion. The system is aimed at bridging two recent approaches to coreference resolution: (1) state-of-the-art non-incremental models that incur quadratic complexity in document length with high computational cost, and (2) memory network-based models which operate incrementally but do not generalize beyond pronouns. For comparison, we simulate an incremental setting by constraining non-incremental systems to form partial coreference chains before observing new sentences. In this setting, our system outperforms comparable state-of-the-art methods by 2 F1 on OntoNotes and 6.8 F1 on the CODI-CRAC 2021 corpus. In a conventional coreference setup, our system achieves 76.3 F1 on OntoNotes and 45.5 F1 on CODI-CRAC 2021, which is comparable to state-of-the-art baselines. We also analyze variations of our system and show that the degree of incrementality in the encoder has a surprisingly large effect on the resulting performance.

pdf
Cross-lingual Inference with A Chinese Entailment Graph
Tianyi Li | Sabine Weber | Mohammad Javad Hosseini | Liane Guillou | Mark Steedman
Findings of the Association for Computational Linguistics: ACL 2022

Predicate entailment detection is a crucial task for question-answering from text, where previous work has explored unsupervised learning of entailment graphs from typed open relation triples. In this paper, we present the first pipeline for building Chinese entailment graphs, which involves a novel high-recall open relation extraction (ORE) method and the first Chinese fine-grained entity typing dataset under the FIGER type ontology. Through experiments on the Levy-Holt dataset, we verify the strength of our Chinese entailment graph, and reveal the cross-lingual complementarity: on the parallel Levy-Holt dataset, an ensemble of Chinese and English entailment graphs outperforms both monolingual graphs, and raises unsupervised SOTA by 4.7 AUC points.

pdf
Language Models Are Poor Learners of Directional Inference
Tianyi Li | Mohammad Javad Hosseini | Sabine Weber | Mark Steedman
Findings of the Association for Computational Linguistics: EMNLP 2022

We examine LMs’ competence in directional predicate entailment by supervised fine-tuning with prompts. Our analysis shows that, contrary to their apparent success on standard NLI, LMs show limited ability to learn such directional inference; moreover, existing datasets fail to test directionality, and/or are infested by artefacts that can be learnt as a proxy for entailments, yielding over-optimistic results. In response, we present BoOQA (Boolean Open QA), a robust multi-lingual evaluation benchmark for directional predicate entailments, extrinsic to existing training sets. On BoOQA, we establish baselines and show evidence that existing LM-prompting models are incompetent directional entailment learners, in contrast to entailment graphs, which are limited mainly by sparsity.

pdf
Erratum for “Formal Basis of a Language Universal”
Miloš Stanojević | Mark Steedman
Computational Linguistics, Volume 48, Issue 1 - March 2022

pdf
Universal Dependencies and Semantics for English and Hebrew Child-directed Speech
Ida Szubert | Omri Abend | Nathan Schneider | Samuel Gibbon | Sharon Goldwater | Mark Steedman
Proceedings of the Society for Computation in Linguistics 2022

2021

pdf
Zero-Shot Cross-Lingual Transfer is a Hard Baseline to Beat in German Fine-Grained Entity Typing
Sabine Weber | Mark Steedman
Proceedings of the Second Workshop on Insights from Negative Results in NLP

The training of NLP models often requires large amounts of labelled training data, which makes it difficult to expand existing models to new languages. While zero-shot cross-lingual transfer relies on multilingual word embeddings to apply a model trained on one language to another, Yarowsky and Ngai (2001) propose the method of annotation projection to generate training data without manual annotation. This method was successfully used for the tasks of named entity recognition and coarse-grained entity typing, but we show that it is outperformed by zero-shot cross-lingual transfer when applied to the similar task of fine-grained entity typing. In our study of fine-grained entity typing with the FIGER type ontology for German, we show that annotation projection amplifies the English model’s tendency to underpredict level 2 labels and is beaten by zero-shot cross-lingual transfer on three novel test sets.

pdf
Blindness to Modality Helps Entailment Graph Mining
Liane Guillou | Sander Bijl de Vroe | Mark Johnson | Mark Steedman
Proceedings of the Second Workshop on Insights from Negative Results in NLP

Understanding linguistic modality is widely seen as important for downstream tasks such as Question Answering and Knowledge Graph Population. Entailment Graph learning might also be expected to benefit from attention to modality. We build Entailment Graphs using a news corpus filtered with a modality parser, and show that stripping modal modifiers from predicates in fact increases performance. This suggests that for some tasks, the pragmatics of modal modification of predicates allows them to contribute as evidence of entailment.
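
As a toy illustration of the modality-stripping preprocessing this abstract describes (the dot-separated predicate encoding is an assumption, not the paper’s format):

```python
MODALS = {"might", "may", "could", "would", "should", "must", "can"}

def strip_modality(predicate):
    """Drop modal modifiers from a dot-separated predicate string."""
    return ".".join(w for w in predicate.split(".") if w not in MODALS)

# Both surface forms now contribute evidence to the same graph node.
print(strip_modality("might.invade"))  # -> invade
print(strip_modality("invade"))        # -> invade
```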

pdf
Modeling Incremental Language Comprehension in the Brain with Combinatory Categorial Grammar
Miloš Stanojević | Shohini Bhattasali | Donald Dunagan | Luca Campanelli | Mark Steedman | Jonathan Brennan | John Hale
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics

Hierarchical sentence structure plays a role in word-by-word human sentence comprehension, but it remains unclear how best to characterize this structure and unknown how exactly it would be recognized in a step-by-step process model. With a view towards sharpening this picture, we model the time course of hemodynamic activity within the brain during an extended episode of naturalistic language comprehension using Combinatory Categorial Grammar (CCG). CCG has well-defined incremental parsing algorithms, surface compositional semantics, and can explain long-range dependencies as well as complicated cases of coordination. We find that CCG-derived predictors improve a regression model of fMRI time course in six language-relevant brain regions, over and above predictors derived from context-free phrase structure. Adding a special Revealing operator to CCG parsing, one designed to handle right-adjunction, improves the fit in three of these regions. This evidence for CCG from neuroimaging bolsters the more general case for mildly context-sensitive grammars in the cognitive science of language.

pdf
Fine-grained General Entity Typing in German using GermaNet
Sabine Weber | Mark Steedman
Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15)

Fine-grained entity typing is important to tasks like relation extraction and knowledge base construction. We find, however, that fine-grained entity typing systems perform poorly on general entities (e.g. “ex-president”) as compared to named entities (e.g. “Barack Obama”). This is due to a lack of general entities in existing training data sets. We show that this problem can be mitigated by automatically generating training data from WordNets. We use a German WordNet equivalent, GermaNet, to automatically generate training data for German general entity typing. We use this data to supplement named entity data to train a neural fine-grained entity typing system. This leads to a 10% improvement in the accuracy of predicting level 1 FIGER types for German general entities, while decreasing named entity type prediction accuracy by only 1%.

pdf
Modality and Negation in Event Extraction
Sander Bijl de Vroe | Liane Guillou | Miloš Stanojević | Nick McKenna | Mark Steedman
Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021)

Language provides speakers with a rich system of modality for expressing thoughts about events, without being committed to their actual occurrence. Modality is commonly used in the political news domain, where both actual and possible courses of events are discussed. NLP systems struggle with these semantic phenomena, often incorrectly extracting events which did not happen, which can lead to issues in downstream applications. We present an open-domain, lexicon-based event extraction system that captures various types of modality. This information is valuable for Question Answering, Knowledge Graph construction and Fact-checking tasks, and our evaluation shows that the system is sufficiently strong to be used in downstream applications.

pdf bib
Formal Basis of a Language Universal
Miloš Stanojević | Mark Steedman
Computational Linguistics, Volume 47, Issue 1 - March 2021

Steedman (2020) proposes as a formal universal of natural language grammar that grammatical permutations of the kind that have given rise to transformational rules are limited to a class known to mathematicians and computer scientists as the “separable” permutations. This class of permutations is exactly the class that can be expressed in combinatory categorial grammars (CCGs). The excluded non-separable permutations do in fact seem to be absent in a number of studies of crosslinguistic variation in word order in nominal and verbal constructions. The number of separable permutations grows with the number n of lexical elements in the construction as the large Schröder number S_{n-1}. Because that number grows much more slowly than the number n! of all permutations, this generalization is also of considerable practical interest for computational applications such as parsing and machine translation. The present article examines the mathematical and computational origins of this restriction, and the reason it is exactly captured in CCG without the imposition of any further constraints.
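
The counting claim is easy to verify computationally. The sketch below (an independent illustration, not code from the article) counts separable permutations by brute force, using their standard characterization as the permutations avoiding the patterns 2413 and 3142, and compares the counts with the large Schröder numbers computed from their usual recurrence.

```python
from itertools import combinations, permutations

def is_separable(p):
    # Separable permutations are exactly those avoiding 2413 and 3142.
    for i, j, k, l in combinations(range(len(p)), 4):
        a, b, c, d = p[i], p[j], p[k], p[l]
        if (c < a < d < b) or (b < d < a < c):  # pattern 2413 / 3142
            return False
    return True

def large_schroeder(m):
    # (m+1)*S_m = 3*(2m-1)*S_{m-1} - (m-2)*S_{m-2}, with S_0 = 1, S_1 = 2.
    s = [1, 2]
    for i in range(2, m + 1):
        s.append((3 * (2 * i - 1) * s[i - 1] - (i - 2) * s[i - 2]) // (i + 1))
    return s[m]

for n in range(1, 7):
    count = sum(is_separable(p) for p in permutations(range(n)))
    print(n, count, large_schroeder(n - 1))  # counts match S_{n-1}
```

For n = 1…6 both columns read 1, 2, 6, 22, 90, 394, growing far more slowly than n! = 1, 2, 6, 24, 120, 720.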

pdf
Semi-Automatic Construction of Text-to-SQL Data for Domain Transfer
Tianyi Li | Sujian Li | Mark Steedman
Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021)

Strong and affordable in-domain data is a desirable asset when transferring trained semantic parsers to novel domains. As previous methods for semi-automatically constructing such data cannot handle the complexity of realistic SQL queries, we propose to construct SQL queries via context-dependent sampling, and introduce the concept of topic. Along with our SQL query construction method, we propose a novel pipeline of semi-automatic Text-to-SQL dataset construction that covers the broad space of SQL queries. We show that the created dataset is comparable with expert annotation along multiple dimensions, and is capable of improving domain transfer performance for SOTA semantic parsers.

pdf
Open-Domain Contextual Link Prediction and its Complementarity with Entailment Graphs
Mohammad Javad Hosseini | Shay B. Cohen | Mark Johnson | Mark Steedman
Findings of the Association for Computational Linguistics: EMNLP 2021

An open-domain knowledge graph (KG) has entities as nodes and natural language relations as edges, and is constructed by extracting (subject, relation, object) triples from text. The task of open-domain link prediction is to infer missing relations in the KG. Previous work has used standard link prediction for the task. Since triples are extracted from text, we can ground them in the larger textual context in which they were originally found. However, standard link prediction methods only rely on the KG structure and ignore the textual context that each triple was extracted from. In this paper, we introduce the new task of open-domain contextual link prediction which has access to both the textual context and the KG structure to perform link prediction. We build a dataset for the task and propose a model for it. Our experiments show that context is crucial in predicting missing relations. We also demonstrate the utility of contextual link prediction in discovering context-independent entailments between relations, in the form of entailment graphs (EG), in which the nodes are the relations. The reverse holds too: context-independent EGs assist in predicting relations in context.

pdf
Cross-lingual Intermediate Fine-tuning improves Dialogue State Tracking
Nikita Moghe | Mark Steedman | Alexandra Birch
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Recent progress in task-oriented neural dialogue systems is largely focused on a handful of languages, as annotation of training data is tedious and expensive. Machine translation has been used to make systems multilingual, but this can introduce a pipeline of errors. Another promising solution is using cross-lingual transfer learning through pretrained multilingual models. Existing methods train multilingual models with additional code-mixed task data or refine the cross-lingual representations through parallel ontologies. In this work, we enhance the transfer learning process by intermediate fine-tuning of pretrained multilingual models, where the multilingual models are fine-tuned with different but related data and/or tasks. Specifically, we use parallel and conversational movie subtitles datasets to design cross-lingual intermediate tasks suitable for downstream dialogue tasks. We use only 200K lines of parallel data for intermediate fine-tuning, which is already available for 1782 language pairs. We test our approach on the cross-lingual dialogue state tracking task for the parallel MultiWoZ (English → Chinese, Chinese → English) and Multilingual WoZ (English → German, English → Italian) datasets. We achieve impressive improvements (> 20% on joint goal accuracy) on the parallel MultiWoZ dataset and the Multilingual WoZ dataset over the vanilla baseline, with only 10% of the target-language task data and a zero-shot setup respectively.

pdf
Multivalent Entailment Graphs for Question Answering
Nick McKenna | Liane Guillou | Mohammad Javad Hosseini | Sander Bijl de Vroe | Mark Johnson | Mark Steedman
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Drawing inferences between open-domain natural language predicates is a necessity for true language understanding. There has been much progress in unsupervised learning of entailment graphs for this purpose. We make three contributions: (1) we reinterpret the Distributional Inclusion Hypothesis to model entailment between predicates of different valencies, like DEFEAT(Biden, Trump) entails WIN(Biden); (2) we actualize this theory by learning unsupervised Multivalent Entailment Graphs of open-domain predicates; and (3) we demonstrate the capabilities of these graphs on a novel question answering task. We show that directional entailment is more helpful for inference than non-directional similarity on questions of fine-grained semantics. We also show that drawing on evidence across valencies answers more questions than using same-valency evidence alone.

pdf
Computing All Quantifier Scopes with CCG
Miloš Stanojević | Mark Steedman
Proceedings of the 14th International Conference on Computational Semantics (IWCS)

We present a method for computing all quantifier scopes that can be extracted from a single CCG derivation. To do that we build on the proposal of Steedman (1999, 2011), where all existential quantifiers are treated as Skolem functions. We extend the approach by introducing a better packed representation of all possible specifications that also includes the node addresses where the specifications happen. These addresses are necessary for recovering all, and only, the possible readings.

2020

pdf
Learning Negation Scope from Syntactic Structure
Nick McKenna | Mark Steedman
Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics

We present a semi-supervised model which learns the semantics of negation purely through analysis of syntactic structure. Linguistic theory posits that the semantics of negation can be understood purely syntactically, though recent research relies on combining a variety of features including part-of-speech tags, word embeddings, and semantic representations to achieve high task performance. Our simplified model returns to syntactic theory and achieves state-of-the-art performance on the task of Negation Scope Detection while demonstrating the tight relationship between the syntax and semantics of negation.

pdf
Aspectuality Across Genre: A Distributional Semantics Approach
Thomas Kober | Malihe Alikhani | Matthew Stone | Mark Steedman
Proceedings of the 28th International Conference on Computational Linguistics

The interpretation of the lexical aspect of verbs in English plays a crucial role in tasks such as recognizing textual entailment and learning discourse-level inferences. We show that two elementary dimensions of aspectual class, states vs. events, and telic vs. atelic events, can be modelled effectively with distributional semantics. We find that a verb’s local context is most indicative of its aspectual class, and we demonstrate that closed class words tend to be stronger discriminating contexts than content words. Our approach outperforms previous work on three datasets. Further, we present a new dataset of human-human conversations annotated with lexical aspects and present experiments that show the correlation of telicity with genre and discourse goals.

pdf
Incorporating Temporal Information in Entailment Graph Mining
Liane Guillou | Sander Bijl de Vroe | Mohammad Javad Hosseini | Mark Johnson | Mark Steedman
Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs)

We present a novel method for injecting temporality into entailment graphs to address the problem of spurious entailments, which may arise from similar but temporally distinct events involving the same pair of entities. We focus on the sports domain, in which the same pairs of teams play on different occasions, with different outcomes. We present an unsupervised model that aims to learn entailments such as win/lose → play, while avoiding the pitfall of learning non-entailments such as win ↛ lose. We evaluate our model on a manually constructed dataset, showing that incorporating time intervals and applying a temporal window around them are effective strategies.
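
As a rough illustration of the windowing idea (not the paper’s code), the sketch below treats two extracted triples about the same entity pair as co-occurring evidence for entailment learning only if their time stamps fall within a shared temporal window; the triple format and window size are assumptions.

```python
from datetime import date, timedelta

# Hypothetical extractions: (predicate, subject, object, publication date).
triples = [
    ("beat",    "Arsenal", "Chelsea", date(2019, 1, 19)),
    ("play",    "Arsenal", "Chelsea", date(2019, 1, 19)),
    ("lose_to", "Arsenal", "Chelsea", date(2019, 5, 29)),  # a later, distinct match
]

WINDOW = timedelta(days=3)

def cooccurring_pairs(triples, window=WINDOW):
    """Yield predicate pairs sharing an entity pair within the temporal window."""
    for i, (p1, s1, o1, t1) in enumerate(triples):
        for p2, s2, o2, t2 in triples[i + 1:]:
            if (s1, o1) == (s2, o2) and abs(t1 - t2) <= window:
                yield p1, p2

print(list(cooccurring_pairs(triples)))
# [('beat', 'play')] -- 'beat'/'lose_to' are kept apart, avoiding win -/-> lose
```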

pdf
Max-Margin Incremental CCG Parsing
Miloš Stanojević | Mark Steedman
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Incremental syntactic parsing has been an active research area both for cognitive scientists trying to model human sentence processing and for NLP researchers attempting to combine incremental parsing with language modelling for ASR and MT. Most effort has been directed at designing the right transition mechanism, but less has been done to answer the question of what a probabilistic model for those transition parsers should look like. The very incremental transition mechanism of a recently proposed CCG parser, when trained in a straightforward locally normalised discriminative fashion, produces very bad results on the English CCGbank. We identify three biases as the causes of this problem: label bias, exposure bias and imbalanced probabilities bias. While known techniques for tackling these biases improve results, they still do not make the parser state of the art. Instead, we tackle all three biases at the same time using an improved version of beam search optimisation that minimises all beam search violations instead of minimising only the biggest violation. The new incremental parser gives better results than all previously published incremental CCG parsers, and outperforms even some widely used non-incremental CCG parsers.

pdf
Span-Based LCFRS-2 Parsing
Miloš Stanojević | Mark Steedman
Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies

The earliest models for discontinuous constituency parsers used mildly context-sensitive grammars, but the fashion has changed in recent years to grammar-less transition-based parsers that use strong neural probabilistic models to greedily predict transitions. We argue that grammar-based approaches still have something to contribute on top of what is offered by transition-based parsers. Concretely, by using a grammar formalism to restrict the space of possible trees we can use dynamic programming parsing algorithms for exact search for the most probable tree. Previous chart-based parsers for discontinuous formalisms used probabilistically weak generative models. We instead use a span-based discriminative neural model that preserves the dynamic programming properties of the chart parsers. Our parser does not use an explicit grammar, but it does use explicit grammar formalism constraints: we generate only trees that are within the LCFRS-2 formalism. These properties allow us to construct a new parsing algorithm that runs in a lower worst-case time complexity of O(l·n^4 + n^6), where n is the sentence length and l is the number of unique non-terminal labels. This parser is efficient in practice, provides the best results among chart-based parsers, and is competitive with the best transition-based parsers. We also show that the main bottleneck for further improvement in performance is the restriction of fan-out to degree 2. We show that well-nestedness is helpful in speeding up parsing, but lowers accuracy.

pdf
The Role of Reentrancies in Abstract Meaning Representation Parsing
Ida Szubert | Marco Damonte | Shay B. Cohen | Mark Steedman
Findings of the Association for Computational Linguistics: EMNLP 2020

Abstract Meaning Representation (AMR) parsing aims at converting sentences into AMR representations. These are graphs and not trees because AMR supports reentrancies (nodes with more than one parent). Following previous findings on the importance of reentrancies for AMR, we empirically find and discuss several linguistic phenomena responsible for reentrancies in AMR, some of which have not received attention before. We categorize the types of errors AMR parsers make with respect to reentrancies. Furthermore, we find that correcting these errors provides an increase of up to 5% Smatch in parsing performance and 20% in reentrancy prediction.

pdf
The role of context in neural pitch accent detection in English
Elizabeth Nielsen | Mark Steedman | Sharon Goldwater
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Prosody is a rich information source in natural language, serving as a marker for phenomena such as contrast. In order to make this information available to downstream tasks, we need a way to detect prosodic events in speech. We propose a new model for pitch accent detection, inspired by the work of Stehwien et al. (2018), who presented a CNN-based model for this task. Our model makes greater use of context by using full utterances as input and adding an LSTM layer. We find that these innovations lead to an improvement from 87.5% to 88.7% accuracy on pitch accent detection on American English speech in the Boston University Radio News Corpus, a state-of-the-art result. We also find that a simple baseline that just predicts a pitch accent on every content word yields 82.2% accuracy, and we suggest that this is the appropriate baseline for this task. Finally, we conduct ablation tests that show pitch is the most important acoustic feature for this task and this corpus.
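
The content-word baseline reported above is simple enough to state in code. A minimal sketch, assuming POS-tagged words; the particular tag set counted as content words is an assumption, not the paper’s exact definition:

```python
CONTENT_TAGS = {"NOUN", "VERB", "ADJ", "ADV", "PROPN", "NUM"}  # assumed definition

def baseline_accents(tagged_words):
    """Predict a pitch accent on every content word, none elsewhere."""
    return [tag in CONTENT_TAGS for _, tag in tagged_words]

utterance = [("the", "DET"), ("mayor", "NOUN"), ("spoke", "VERB"),
             ("very", "ADV"), ("briefly", "ADV"), ("today", "NOUN")]
print(baseline_accents(utterance))  # [False, True, True, True, True, True]
```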

2019

pdf
Wide-Coverage Neural A* Parsing for Minimalist Grammars
John Torr | Miloš Stanojević | Mark Steedman | Shay B. Cohen
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Minimalist Grammars (Stabler, 1997) are a computationally oriented and rigorous formalisation of many aspects of Chomsky’s (1995) Minimalist Program. This paper presents the first ever application of this formalism to the task of realistic wide-coverage parsing. The parser uses a linguistically expressive yet highly constrained grammar, together with an adaptation of the A* search algorithm currently used in CCG parsing (Lewis and Steedman, 2014; Lewis et al., 2016), with supertag probabilities provided by a bi-LSTM neural network supertagger trained on MGbank, a corpus of MG derivation trees. We report on some promising initial experimental results for overall dependency recovery as well as on the recovery of certain unbounded long distance dependencies. Finally, although like other MG parsers, ours has a high order polynomial worst case time complexity, we show that in practice its expected time complexity is cubic in the length of the sentence. The parser is publicly available.

pdf
Duality of Link Prediction and Entailment Graph Induction
Mohammad Javad Hosseini | Shay B. Cohen | Mark Johnson | Mark Steedman
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Link prediction and entailment graph induction are often treated as different problems. In this paper, we show that these two problems are actually complementary. We train a link prediction model on a knowledge graph of assertions extracted from raw text. We propose an entailment score that exploits the new facts discovered by the link prediction model, and then form entailment graphs between relations. We further use the learned entailments to predict improved link prediction scores. Our results show that the two tasks can benefit from each other. The new entailment score outperforms prior state-of-the-art results on a standard entailment dataset and the new link prediction scores show improvements over the raw link prediction scores.

pdf
CCG Parsing Algorithm with Incremental Tree Rotation
Miloš Stanojević | Mark Steedman
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

The main obstacle to incremental sentence processing arises from right-branching constituent structures, which are present in the majority of English sentences, as well as optional constituents that adjoin on the right, such as right adjuncts and right conjuncts. In CCG, many right-branching derivations can be replaced by semantically equivalent left-branching incremental derivations. The problem of right-adjunction is more resistant to solution, and has been tackled in the past using revealing-based approaches that often rely either on higher-order unification over lambda terms (Pareschi and Steedman, 1987) or on heuristics over dependency representations that do not cover the whole CCGbank (Ambati et al., 2015). We propose a new incremental parsing algorithm for CCG, following the same revealing tradition of work but taking a purely syntactic approach that does not depend on access to a distinct level of semantic representation. This algorithm can cover the whole CCGbank, with greater incrementality and accuracy than previous proposals.

pdf
Node Embeddings for Graph Merging: Case of Knowledge Graph Construction
Ida Szubert | Mark Steedman
Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)

Combining two graphs requires merging the nodes which are counterparts of each other. In this process errors occur, resulting in incorrect merging or incorrect failure to merge. We find a high prevalence of such errors when using AskNET, an algorithm for building Knowledge Graphs from text corpora. AskNET’s node-matching method uses string similarity, which we propose to replace with vector-embedding similarity. We explore graph-based and word-based embedding models and show an overall error reduction from 56% to 23.6%, with a reduction of over half in both types of incorrect node matching.
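
A minimal sketch of the proposed change, matching nodes by embedding cosine similarity instead of string similarity; the toy embedding table and the threshold are illustrative assumptions.

```python
import numpy as np

# Hypothetical node embeddings (e.g., from a graph- or word-embedding model).
emb = {
    "Barack Obama":   np.array([0.9, 0.1, 0.3]),
    "B. Obama":       np.array([0.88, 0.12, 0.28]),
    "Michelle Obama": np.array([0.2, 0.9, 0.4]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def match_node(query, candidates, threshold=0.95):
    """Return the best embedding match for a node, or None below threshold."""
    best = max(candidates, key=lambda c: cosine(emb[query], emb[c]))
    return best if cosine(emb[query], emb[best]) >= threshold else None

print(match_node("B. Obama", ["Barack Obama", "Michelle Obama"]))  # Barack Obama
```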

pdf
Temporal and Aspectual Entailment
Thomas Kober | Sander Bijl de Vroe | Mark Steedman
Proceedings of the 13th International Conference on Computational Semantics - Long Papers

Inferences regarding “Jane’s arrival in London” from predications such as “Jane is going to London” or “Jane has gone to London” depend on tense and aspect of the predications. Tense determines the temporal location of the predication in the past, present or future of the time of utterance. The aspectual auxiliaries on the other hand specify the internal constituency of the event, i.e. whether the event of “going to London” is completed and whether its consequences hold at that time or not. While tense and aspect are among the most important factors for determining natural language inference, there has been very little work to show whether modern embedding models capture these semantic concepts. In this paper we propose a novel entailment dataset and analyse the ability of contextualised word representations to perform inference on predications across aspectual types and tenses. We show that they encode a substantial amount of information relating to tense and aspect, but fail to consistently model inferences that require reasoning with these semantic properties.


Construction and Alignment of Multilingual Entailment Graphs for Semantic Inference
Sabine Weber | Mark Steedman
Proceedings of the 2019 Workshop on Widening NLP

This paper presents ongoing work on the construction and alignment of predicate entailment graphs in English and German. We extract predicate-argument pairs from large corpora of monolingual English and German news text and construct monolingual paraphrase clusters and entailment graphs. We use an aligned subset of entities to derive the bilingual alignment of entities and relations, and achieve better than baseline results on a translated subset of a predicate entailment data set (Levy and Dagan, 2016) and the German portion of XNLI (Conneau et al., 2018).

2018

pdf
Character-Level Models versus Morphology in Semantic Role Labeling
Gözde Gül Şahin | Mark Steedman
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Character-level models have become a popular approach, especially for their accessibility and ability to handle unseen data. However, little is known about their ability to reveal the underlying morphological structure of a word, which is a crucial skill for high-level semantic analysis tasks such as semantic role labeling (SRL). In this work, we train various types of SRL models that use word-, character- and morphology-level information, and analyze how the performance of character-level models compares to that of word- and morphology-level models for several languages. We conduct an in-depth error analysis for each morphological typology and analyze the strengths and limitations of character-level models with respect to out-of-domain data, training data size, long-range dependencies and model complexity. Our exhaustive analyses shed light on important characteristics of character-level models and their semantic capability.

pdf
Predicting accuracy on large datasets from smaller pilot data
Mark Johnson | Peter Anderson | Mark Dras | Mark Steedman
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Because obtaining training data is often the most difficult part of an NLP or ML project, we develop methods for predicting how much data is required to achieve a desired test accuracy by extrapolating results from models trained on a small pilot training dataset. We model how accuracy varies as a function of training size on subsets of the pilot data, and use that model to predict how much training data would be required to achieve the desired accuracy. We introduce a new performance extrapolation task to evaluate how well different extrapolations predict accuracy on larger training sets. We show that details of hyperparameter optimisation and the extrapolation models can have dramatic effects in a document classification task. We believe this is an important first step in developing methods for estimating the resources required to meet specific engineering performance targets.
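
The core idea, fitting an accuracy-versus-training-size curve on pilot subsets and extrapolating it, can be sketched as follows. This fits one common functional form (an inverse power law) with scipy; the functional form and the numbers are illustrative assumptions, not the paper’s specific extrapolation models.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    # acc(n) = a - b * n**(-c): approaches ceiling a as training size n grows.
    return a - b * n ** (-c)

# Accuracies measured on nested subsets of a pilot training set (made-up numbers).
sizes = np.array([250, 500, 1000, 2000, 4000])
accs  = np.array([0.71, 0.76, 0.80, 0.83, 0.85])

(a, b, c), _ = curve_fit(power_law, sizes, accs, p0=(0.95, 1.0, 0.5), maxfev=10000)
print(f"estimated ceiling: {a:.3f}")
print(f"predicted accuracy at 100k examples: {power_law(100_000, a, b, c):.3f}")

# Invert the fitted curve to ask: how much data for 90% accuracy?
target = 0.90
if a > target:
    n_needed = (b / (a - target)) ** (1 / c)
    print(f"examples needed for {target:.0%}: {n_needed:,.0f}")
```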

pdf bib
The Lost Combinator
Mark Steedman
Computational Linguistics, Volume 44, Issue 4 - December 2018

pdf
Learning Typed Entailment Graphs with Global Soft Constraints
Mohammad Javad Hosseini | Nathanael Chambers | Siva Reddy | Xavier R. Holt | Shay B. Cohen | Mark Johnson | Mark Steedman
Transactions of the Association for Computational Linguistics, Volume 6

This paper presents a new method for learning typed entailment graphs from text. We extract predicate-argument structures from multiple-source news corpora, and compute local distributional similarity scores to learn entailments between predicates with typed arguments (e.g., person contracted disease). Previous work has used transitivity constraints to improve local decisions, but these constraints are intractable on large graphs. We instead propose a scalable method that learns globally consistent similarity scores based on new soft constraints that consider both the structures across typed entailment graphs and inside each graph. Learning takes only a few hours to run over 100K predicates and our results show large improvements over local similarity scores on two entailment data sets. We further show improvements over paraphrases and entailments from the Paraphrase Database, and prior state-of-the-art entailment graphs. We show that the entailment graphs improve performance in a downstream task.
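
A minimal sketch of the local step only (the paper’s contribution is the global soft-constraint learning on top of such scores): a directional distributional score between typed predicates over shared argument pairs, here Weeds precision as one standard choice. The toy counts are assumptions.

```python
from collections import Counter

# Hypothetical counts of typed argument pairs observed with each predicate.
features = {
    "contract.disease": Counter({("person:john", "disease:flu"): 5,
                                 ("person:mary", "disease:measles"): 3}),
    "have.disease":     Counter({("person:john", "disease:flu"): 7,
                                 ("person:mary", "disease:measles"): 4,
                                 ("person:sue",  "disease:cold"): 6}),
}

def weeds_precision(p, q):
    """Directional score: how much of p's argument mass is covered by q."""
    fp, fq = features[p], features[q]
    covered = sum(w for arg, w in fp.items() if arg in fq)
    return covered / sum(fp.values())

print(weeds_precision("contract.disease", "have.disease"))  # 1.0: contract => have
print(weeds_precision("have.disease", "contract.disease"))  # < 1: not the reverse
```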

pdf
Data Augmentation via Dependency Tree Morphing for Low-Resource Languages
Gözde Gül Şahin | Mark Steedman
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Neural NLP systems achieve high scores in the presence of sizable training datasets. Lack of such datasets leads to poor system performance in the case of low-resource languages. We present two simple text augmentation techniques using dependency trees, inspired by image processing. We “crop” sentences by removing dependency links, and we “rotate” sentences by moving the tree fragments around the root. We apply these techniques to augment the training sets of low-resource languages in the Universal Dependencies project. We implement a character-level sequence tagging model and evaluate the augmented datasets on the part-of-speech tagging task. We show that crop and rotate provide improvements over models trained with non-augmented data for the majority of the languages, especially for languages with rich case-marking systems.
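
A minimal sketch of the “crop” operation under an assumed input format (tokens as (id, form, head) triples with head 0 for the root); “rotate” would analogously reattach whole subtrees around the root. This is illustrative, not the authors’ released code.

```python
# Each token: (id, form, head) with head 0 for the root -- a CoNLL-like encoding.
sentence = [(1, "the", 2), (2, "dog", 3), (3, "chased", 0),
            (4, "the", 5), (5, "cat", 3), (6, "yesterday", 3)]

def subtree_ids(sentence, root_id):
    """Collect the ids of root_id and everything it (transitively) dominates."""
    keep, frontier = set(), {root_id}
    while frontier:
        keep |= frontier
        frontier = {i for (i, _, h) in sentence if h in keep} - keep
    return keep

def crop(sentence, keep_child):
    """Keep the root predicate plus one chosen dependent's whole subtree."""
    root = next(i for (i, _, h) in sentence if h == 0)
    keep = {root} | subtree_ids(sentence, keep_child)
    return [tok for tok in sentence if tok[0] in keep]

print([form for (_, form, _) in crop(sentence, 5)])  # ['chased', 'the', 'cat']
```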

2017

pdf
Universal Semantic Parsing
Siva Reddy | Oscar Täckström | Slav Petrov | Mark Steedman | Mirella Lapata
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Universal Dependencies (UD) offer a uniform cross-lingual syntactic representation, with the aim of advancing multilingual applications. Recent work shows that semantic parsing can be accomplished by transforming syntactic dependencies to logical forms. However, this work is limited to English, and cannot process dependency graphs, which allow handling complex phenomena such as control. In this work, we introduce UDepLambda, a semantic interface for UD, which maps natural language to logical forms in an almost language-independent fashion and can process dependency graphs. We perform experiments on question answering against Freebase and provide German and Spanish translations of the WebQuestions and GraphQuestions datasets to facilitate multilingual evaluation. Results show that UDepLambda outperforms strong baselines across languages and datasets. For English, it achieves a 4.9 F1 point improvement over the state-of-the-art on GraphQuestions.

2016

pdf
Evaluating Induced CCG Parsers on Grounded Semantic Parsing
Yonatan Bisk | Siva Reddy | John Blitzer | Julia Hockenmaier | Mark Steedman
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf
Transforming Dependency Structures to Logical Forms for Semantic Parsing
Siva Reddy | Oscar Täckström | Michael Collins | Tom Kwiatkowski | Dipanjan Das | Mark Steedman | Mirella Lapata
Transactions of the Association for Computational Linguistics, Volume 4

The strongly typed syntax of grammar formalisms such as CCG, TAG, LFG and HPSG offers a synchronous framework for deriving syntactic structures and semantic logical forms. In contrast—partly due to the lack of a strong type system—dependency structures are easy to annotate and have become a widely used form of syntactic analysis for many languages. However, the lack of a type system makes a formal mechanism for deriving logical forms from dependency structures challenging. We address this by introducing a robust system based on the lambda calculus for deriving neo-Davidsonian logical forms from dependency trees. These logical forms are then used for semantic parsing of natural language to Freebase. Experiments on the Free917 and WebQuestions datasets show that our representation is superior to the original dependency trees and that it outperforms a CCG-based representation on this task. Compared to prior work, we obtain the strongest result to date on Free917 and competitive results on WebQuestions.

pdf
Shift-Reduce CCG Parsing using Neural Network Models
Bharat Ram Ambati | Tejaswini Deoskar | Mark Steedman
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Assessing Relative Sentence Complexity using an Incremental CCG Parser
Bharat Ram Ambati | Siva Reddy | Mark Steedman
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2015

pdf
An Incremental Algorithm for Transition-based CCG Parsing
Bharat Ram Ambati | Tejaswini Deoskar | Mark Johnson | Mark Steedman
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Lexical Event Ordering with an Edge-Factored Model
Omri Abend | Shay B. Cohen | Mark Steedman
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf
Parser Adaptation to the Biomedical Domain without Re-Training
Jeff Mitchell | Mark Steedman
Proceedings of the Sixth International Workshop on Health Text Mining and Information Analysis

pdf
Orthogonality of Syntax and Semantics within Distributional Spaces
Jeff Mitchell | Mark Steedman
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

pdf
A Computationally Efficient Algorithm for Learning Topical Collocation Models
Zhendong Zhao | Lan Du | Benjamin Börschinger | John K Pate | Massimiliano Ciaramita | Mark Steedman | Mark Johnson
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

2014

pdf bib
Robust Semantics for Semantic Parsing
Mark Steedman
Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing

pdf
Combining Formal and Distributional Models of Temporal and Intensional Semantics
Mike Lewis | Mark Steedman
Proceedings of the ACL 2014 Workshop on Semantic Parsing

pdf
Improved CCG Parsing with Semi-supervised Supertagging
Mike Lewis | Mark Steedman
Transactions of the Association for Computational Linguistics, Volume 2

Current supervised parsers are limited by the size of their labelled training data, making improving them with unlabelled data an important goal. We show how a state-of-the-art CCG parser can be enhanced, by predicting lexical categories using unsupervised vector-space embeddings of words. The use of word embeddings enables our model to better generalize from the labelled data, and allows us to accurately assign lexical categories without depending on a POS-tagger. Our approach leads to substantial improvements in dependency parsing results over the standard supervised CCG parser when evaluated on Wall Street Journal (0.8%), Wikipedia (1.8%) and biomedical (3.4%) text. We compare the performance of two recently proposed approaches for classification using a wide variety of word embeddings. We also give a detailed error analysis demonstrating where using embeddings outperforms traditional feature sets, and showing how including POS features can decrease accuracy.

pdf
Large-scale Semantic Parsing without Question-Answer Pairs
Siva Reddy | Mirella Lapata | Mark Steedman
Transactions of the Association for Computational Linguistics, Volume 2

In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. Evaluation experiments on a subset of the Free917 and WebQuestions benchmark datasets show our semantic parser improves over the state of the art.

pdf
A* CCG Parsing with a Supertag-factored Model
Mike Lewis | Mark Steedman
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

pdf
Lexical Inference over Multi-Word Predicates: A Distributional Approach
Omri Abend | Shay B. Cohen | Mark Steedman
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf
Generalizing a Strongly Lexicalized Parser using Unlabeled Data
Tejaswini Deoskar | Christos Christodoulopoulos | Alexandra Birch | Mark Steedman
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics

pdf
A Generative Model for User Simulation in a Spatial Navigation Domain
Aciel Eshky | Ben Allison | Subramanian Ramamoorthy | Mark Steedman
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics

pdf
Improving Dependency Parsers using Combinatory Categorial Grammar
Bharat Ram Ambati | Tejaswini Deoskar | Mark Steedman
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers

2013

pdf
Unsupervised Induction of Cross-Lingual Semantic Relations
Mike Lewis | Mark Steedman
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf
Using CCG categories to improve Hindi dependency parsing
Bharat Ram Ambati | Tejaswini Deoskar | Mark Steedman
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf
The Effect of Higher-Order Dependency Features in Discriminative Phrase-Structure Parsing
Greg Coppola | Mark Steedman
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf
Combined Distributional and Logical Semantics
Mike Lewis | Mark Steedman
Transactions of the Association for Computational Linguistics, Volume 1

We introduce a new approach to semantics which combines the benefits of distributional and formal logical semantics. Distributional models have been successful in modelling the meanings of content words, but logical semantics is necessary to adequately represent many function words. We follow formal semantics in mapping language to logical representations, but differ in that the relational constants used are induced by offline distributional clustering at the level of predicate-argument structure. Our clustering algorithm is highly scalable, allowing us to run on corpora the size of Gigaword. Different senses of a word are disambiguated based on their induced types. We outperform a variety of existing approaches on a wide-coverage question answering task, and demonstrate the ability to make complex multi-sentence inferences involving quantifiers on the FraCaS suite.

pdf bib
Robust Computational Semantics
Mark Steedman
Proceedings of the Australasian Language Technology Association Workshop 2013 (ALTA 2013)

2012

pdf
Probabilistic Models of Grammar Acquisition
Mark Steedman
Proceedings of the Workshop on Computational Models of Language Acquisition and Loss

pdf
Turning the pipeline into a loop: Iterated unsupervised dependency parsing and PoS induction
Christos Christodoulopoulos | Sharon Goldwater | Mark Steedman
Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure

pdf
A Probabilistic Model of Syntactic and Semantic Acquisition from Child-Directed Utterances and their Meanings
Tom Kwiatkowski | Sharon Goldwater | Luke Zettlemoyer | Mark Steedman
Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics

pdf
Generative Goal-Driven User Simulation for Dialog Management
Aciel Eshky | Ben Allison | Mark Steedman
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

2011

pdf bib
Computing Scope in a CCG Parser
Mark Steedman
Proceedings of the 12th International Conference on Parsing Technologies

pdf
Simple Semi-Supervised Learning for Prepositional Phrase Attachment
Gregory F. Coppola | Alexandra Birch | Tejaswini Deoskar | Mark Steedman
Proceedings of the 12th International Conference on Parsing Technologies

pdf
Grammar Induction from Text Using Small Syntactic Prototypes
Prachya Boonkwan | Mark Steedman
Proceedings of 5th International Joint Conference on Natural Language Processing

pdf
A Bayesian Mixture Model for PoS Induction Using Multiple Features
Christos Christodoulopoulos | Sharon Goldwater | Mark Steedman
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

pdf
Semi-supervised CCG Lexicon Extension
Emily Thomforde | Mark Steedman
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

pdf
Lexical Generalization in CCG Grammar Induction for Semantic Parsing
Tom Kwiatkowski | Luke Zettlemoyer | Sharon Goldwater | Mark Steedman
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

2010

pdf
Two Decades of Unsupervised POS Induction: How Far Have We Come?
Christos Christodoulopoulos | Sharon Goldwater | Mark Steedman
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf
Inducing Probabilistic CCG Grammars from Logical Form with Higher-Order Unification
Tom Kwiatkowski | Luke Zettlemoyer | Sharon Goldwater | Mark Steedman
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf
A Multi-Dimensional Analysis of Japanese Benefactives: The Case of the Yaru-Construction
Akira Otani | Mark Steedman
Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation

2009

pdf
Unbounded Dependency Recovery for Parser Evaluation
Laura Rimell | Stephen Clark | Mark Steedman
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf
Note on Japanese Epistemic Verb Constructions: A Surface-Compositional Analysis
Akira Ohtani | Mark Steedman
Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 1

2008

pdf
On Japanese Desiderative Constructions
Akira Ohtani | Mark Steedman
Proceedings of the 22nd Pacific Asia Conference on Language, Information and Computation

pdf
Last Words: On Becoming a Discipline
Mark Steedman
Computational Linguistics, Volume 34, Number 1, March 2008

2007

pdf
Case, Coordination, and Information Structure in Japanese
Akira Otani | Mark Steedman
Proceedings of the 21st Pacific Asia Conference on Language, Information and Computation

pdf
CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank
Julia Hockenmaier | Mark Steedman
Computational Linguistics, Volume 33, Number 3, September 2007

pdf
Planning Dialog Actions
Mark Steedman | Ronald Petrick
Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue

2005

pdf
A Framework for Annotating Information Structure in Discourse
Sasha Calhoun | Malvina Nissim | Mark Steedman | Jason Brenier
Proceedings of the Workshop on Frontiers in Corpus Annotations II: Pie in the Sky

2004

pdf
Object-Extraction and Question-Parsing using CCG
Stephen Clark | Mark Steedman | James R. Curran
Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing

pdf
Wide-Coverage Semantic Representations from a CCG Parser
Johan Bos | Stephen Clark | Mark Steedman | James R. Curran | Julia Hockenmaier
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

pdf
An Annotation Scheme for Information Status in Dialogue
Malvina Nissim | Shipra Dingare | Jean Carletta | Mark Steedman
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

2003

pdf
Example Selection for Bootstrapping Statistical Parsers
Mark Steedman | Rebecca Hwa | Stephen Clark | Miles Osborne | Anoop Sarkar | Julia Hockenmaier | Paul Ruhlen | Steven Baker | Jeremiah Crim
Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics

pdf
Bootstrapping statistical parsers from small datasets
Mark Steedman | Miles Osborne | Anoop Sarkar | Stephen Clark | Rebecca Hwa | Julia Hockenmaier | Paul Ruhlen | Steven Baker | Jeremiah Crim
10th Conference of the European Chapter of the Association for Computational Linguistics

2002

pdf
Building Deep Dependency Structures using a Wide-Coverage CCG Parser
Stephen Clark | Julia Hockenmaier | Mark Steedman
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics

pdf
Generative Models for Statistical Parsing with Combinatory Categorial Grammar
Julia Hockenmaier | Mark Steedman
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics

pdf
Acquiring Compact Lexicalized Grammars from a Cleaner Treebank
Julia Hockenmaier | Mark Steedman
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

1999

pdf
Alternating Quantifier Scope in CCG
Mark Steedman
Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics

1997

pdf
Making Use of Intonation in Interactive Dialogue Translation
Mark Steedman
Proceedings of the Fifth International Workshop on Parsing Technologies

Intonational information is frequently discarded in speech recognition, and assigned by default heuristics in text-to-speech generation. However, in many applications involving dialogue and interactive discourse, intonation conveys significant information, and we ignore it at our peril. Translating telephones and personal assistants make an interesting test case, in which the salience of rapidly shifting discourse topics and the fact that sentences are machine-generated, rather than written by humans, combine to make the application particularly vulnerable to our poor theoretical grasp of intonation and its functions. I will discuss a number of approaches to the problem for such applications, ranging from cheap tricks to a combinatory grammar-based theory of the semantics involved and a syntax-phonology interface for building and generating from interpretations.

1994

pdf
Information Based Intonation Synthesis
Scott Prevost | Mark Steedman
Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994

pdf
Research in Natural Language Processing
A. Joshi | M. Marcus | M. Steedman | B. Webber
Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994

1993

pdf
Natural Language Research
Aravind Joshi | Mitch Marcus | Mark Steedman | Bonnie Webber
Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993

pdf
Generating Contextually Appropriate Intonation
Scott Prevost | Mark Steedman
Sixth Conference of the European Chapter of the Association for Computational Linguistics

1992

pdf
Natural Language Research
Aravind Joshi | Mitch Marcus | Mark Steedman | Bonnie Webber
Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, February 23-26, 1992

1991

pdf
Type-Raising and Directionality in Combinatory Grammar
Mark Steedman
29th Annual Meeting of the Association for Computational Linguistics

pdf
Natural Language Research
Aravind K. Joshi | Mitch Marcus | Mark Steedman | Bonnie Webber
Speech and Natural Language: Proceedings of a Workshop Held at Pacific Grove, California, February 19-22, 1991

1990

pdf
Natural Language Research
Aravind Joshi | Mitch Marcus | Mark Steedman | Bonnie Webber
Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990

pdf
Narrated Animation: A Case for Generation
Norman Badler | Mark Steedman | Bonnie Lynn Webber
Proceedings of the Fifth International Workshop on Natural Language Generation

pdf bib
Structure and Intonation in Spoken Language Understanding
Mark Steedman
28th Annual Meeting of the Association for Computational Linguistics

1989

pdf
Parsing Spoken Language Using Combinatory Grammars
Mark Steedman
Proceedings of the First International Workshop on Parsing Technologies

pdf
Natural Language Research
Aravind Joshi | Mitch Marcus | Mark Steedman | Bonnie Webber
Speech and Natural Language: Proceedings of a Workshop Held at Philadelphia, Pennsylvania, February 21-23, 1989

pdf
Intonation and Syntax in Spoken Language Systems
Mark Steedman
Speech and Natural Language: Proceedings of a Workshop Held at Philadelphia, Pennsylvania, February 21-23, 1989

1988

pdf
Temporal Ontology and Temporal Reference
Marc Moens | Mark Steedman
Computational Linguistics, Volume 14, Number 2, June 1988

1987

pdf bib
Temporal Ontology in Natural Language
Marc Moens | Mark Steedman
25th Annual Meeting of the Association for Computational Linguistics

pdf
A Lazy way to Chart-Parse with Categorial Grammars
Remo Pareschi | Mark Steedman
25th Annual Meeting of the Association for Computational Linguistics
