Ralph Weischedel

Also published as: Ralph M. Weischedel


2023

pdf
Remember what you did so you know what to do next
Manuel Ciosici | Alex Hedges | Yash Kankanampati | Justin Martin | Marjorie Freedman | Ralph Weischedel
Findings of the Association for Computational Linguistics: EMNLP 2023

We explore using the 6B-parameter GPT-J language model to create a plan for a simulated robot to achieve 30 classes of goals in ScienceWorld, a text game simulating elementary science experiments, a setting in which previously published empirical work has shown large language models (LLMs) to be a poor fit (Wang et al., 2022). Using the Markov assumption, the LLM outperforms the reinforcement-learning-based state of the art by a factor of 1.4. When we fill the LLM’s input buffer with as many prior steps as will fit, the improvement rises to 3.3x. Even when training on only 6.5% of the training data, we observe a 2.3x improvement over the state of the art. Our experiments show that performance varies widely across the 30 classes of actions, indicating that averaging over tasks can hide significant performance issues.
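A minimal sketch of the history-filling strategy described in the abstract: greedily pack the most recent (action, observation) steps into a fixed token budget, newest last. The function names and the whitespace-based token count are illustrative stand-ins for GPT-J's actual tokenizer and 2,048-token window, not the authors' implementation.

    def build_prompt(goal, history, max_tokens=2048, reserve=64):
        """Return a prompt with the goal plus the longest suffix of history
        that fits; under the Markov assumption only the last step is kept."""
        def n_tokens(text):                 # crude proxy for a real tokenizer
            return len(text.split())

        header = f"Goal: {goal}\n"
        budget = max_tokens - reserve - n_tokens(header)

        kept = []
        for step in reversed(history):      # walk from the newest step back
            line = f"> {step['action']}\n{step['observation']}\n"
            cost = n_tokens(line)
            if cost > budget:
                break
            kept.append(line)
            budget -= cost

        return header + "".join(reversed(kept)) + "> "

    history = [
        {"action": "open door to kitchen", "observation": "The door is open."},
        {"action": "go to kitchen", "observation": "You are in the kitchen."},
    ]
    print(build_prompt("boil water", history))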

pdf
ACQUIRED: A Dataset for Answering Counterfactual Questions In Real-Life Videos
Te-Lin Wu | Zi-Yi Dou | Qingyuan Hu | Yu Hou | Nischal Chandra | Marjorie Freedman | Ralph Weischedel | Nanyun Peng
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Multimodal counterfactual reasoning is a vital yet challenging ability for AI systems. It involves predicting the outcomes of hypothetical circumstances based on vision and language inputs, which enables AI models to learn from failures and explore hypothetical scenarios. Despite its importance, only a few datasets target the counterfactual reasoning abilities of multimodal models, and those that exist cover only synthetic environments or specific types of events (e.g., traffic collisions), making it hard to reliably benchmark model generalization across diverse real-world scenarios and reasoning dimensions. To overcome these limitations, we develop a video question answering dataset, ACQUIRED: it consists of 3.9K annotated videos encompassing a wide range of event types and incorporating both first- and third-person viewpoints, ensuring a focus on real-world diversity. In addition, each video is annotated with questions that span three distinct dimensions of reasoning (physical, social, and temporal), which comprehensively evaluate a model’s counterfactual abilities along multiple aspects. We benchmark several state-of-the-art language-only and multimodal models on our dataset, and the experimental results demonstrate a significant performance gap (>13%) between models and humans. The findings suggest that multimodal counterfactual reasoning remains an open challenge and that ACQUIRED is a comprehensive and reliable benchmark for inspiring future research in this direction.

2022

pdf
Understanding Multimodal Procedural Knowledge by Sequencing Multimodal Instructional Manuals
Te-Lin Wu | Alex Spangher | Pegah Alipoormolabashi | Marjorie Freedman | Ralph Weischedel | Nanyun Peng
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The ability to sequence unordered events is evidence of comprehension and reasoning about real-world tasks/procedures, and is essential for applications such as task planning and multi-source instruction summarization. It often requires a thorough understanding of temporal common sense and multimodal information, since procedures are often conveyed through a combination of text and images. While humans are capable of reasoning about and sequencing unordered procedural instructions, the extent to which current machine learning methods possess such capability is still an open question. In this work, we benchmark models’ capability of reasoning over and sequencing unordered multimodal instructions by curating datasets from online instructional manuals and collecting comprehensive human annotations. We find that current state-of-the-art models not only perform significantly worse than humans but also seem incapable of efficiently utilizing multimodal information. To improve machines’ performance on multimodal event sequencing, we propose sequence-aware pretraining techniques that exploit the sequential alignment properties of both text and images, resulting in >5% improvements in perfect match ratio.
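The headline metric, perfect match ratio, lends itself to a one-function sketch. This is our reading of the metric (the fraction of instances whose predicted ordering exactly equals the gold ordering); the paper's exact definition may differ in detail.

    def perfect_match_ratio(predicted, gold):
        """Fraction of instances whose full predicted order is exactly right."""
        assert len(predicted) == len(gold)
        exact = sum(1 for p, g in zip(predicted, gold) if list(p) == list(g))
        return exact / len(gold)

    print(perfect_match_ratio([[0, 1, 2], [2, 0, 1]],
                              [[0, 1, 2], [0, 1, 2]]))   # -> 0.5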

2021

pdf
Plot-guided Adversarial Example Construction for Evaluating Open-domain Story Generation
Sarik Ghazarian | Zixi Liu | Akash S M | Ralph Weischedel | Aram Galstyan | Nanyun Peng
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

With the recent advances in open-domain story generation, the lack of reliable automatic evaluation metrics has become an increasingly pressing issue that hinders the field’s progress. Research in this area suggests that learnable evaluation metrics promise more accurate assessments because they correlate more highly with human judgments. A critical bottleneck in obtaining a reliable learnable evaluation metric is the lack of high-quality training data on which classifiers can learn to distinguish plausible from implausible machine-generated stories. Previous work relied on heuristically manipulating plausible examples to mimic possible system drawbacks, such as repetition, contradiction, or irrelevant content, at the text level, which can be unnatural and oversimplifies the characteristics of implausible machine-generated stories. We propose to tackle these issues by generating a more comprehensive set of implausible stories using plots, which are structured representations of controllable factors used to generate stories. Since these plots are compact and structured, it is easier to manipulate them to generate text with targeted undesirable properties while maintaining the grammatical correctness and naturalness of the generated sentences. To improve the quality of generated implausible stories, we further apply the adversarial filtering procedure presented by (CITATION) to select a more nuanced set of implausible texts. Experiments show that evaluation metrics trained on our generated data yield more reliable automatic assessments that correlate remarkably better with human judgments than the baselines.
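To make the plot-manipulation idea concrete, here is a toy sketch. The (subject, verb, object) event representation and the two perturbations are our own simplification of the paper's plot structures, not its actual implementation.

    import random

    def swap_events(plot, rng):
        """Break temporal coherence by swapping two events in the plot."""
        i, j = rng.sample(range(len(plot)), 2)
        plot = list(plot)
        plot[i], plot[j] = plot[j], plot[i]
        return plot

    def swap_entities(plot, rng):
        """Introduce a contradiction by confusing two participants."""
        subjects = sorted({s for s, _, _ in plot})
        a, b = rng.sample(subjects, 2)
        sub = {a: b, b: a}
        return [(sub.get(s, s), v, o) for s, v, o in plot]

    rng = random.Random(0)
    plot = [("Mia", "finds", "a key"), ("Mia", "opens", "the chest"),
            ("Leo", "takes", "the gold")]
    print(swap_events(plot, rng))    # temporally implausible plot
    print(swap_entities(plot, rng))  # entity-contradictory plot

Either perturbed plot would then be fed to the story generator, yielding an implausible but fluent negative example.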

pdf bib
Machine-Assisted Script Curation
Manuel Ciosici | Joseph Cummings | Mitchell DeHaven | Alex Hedges | Yash Kankanampati | Dong-Ho Lee | Ralph Weischedel | Marjorie Freedman
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations

We describe Machine-Aided Script Curator (MASC), a system for human-machine collaborative script authoring. Scripts produced with MASC include (1) English descriptions of sub-events that comprise a larger, complex event; (2) event types for each of those events; (3) a record of entities expected to participate in multiple sub-events; and (4) temporal sequencing between the sub-events. MASC automates portions of the script creation process with suggestions for event types, links to Wikidata, and sub-events that may have been forgotten. We illustrate how these automations are useful to the script writer with a few case-study scripts.

pdf
Perhaps PTLMs Should Go to School – A Task to Assess Open Book and Closed Book QA
Manuel Ciosici | Joe Cecil | Dong-Ho Lee | Alex Hedges | Marjorie Freedman | Ralph Weischedel
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Our goal is to deliver a new task and leaderboard to stimulate research on question answering and pre-trained language models (PTLMs) that understand a significant instructional document, e.g., an introductory college textbook or a manual. PTLMs have shown great success in many question-answering tasks given significant supervised training, but much less so in zero-shot settings. We propose a new task that includes two college-level introductory texts in the social sciences (American Government 2e) and humanities (U.S. History), hundreds of true/false statements based on review questions written by the textbook authors, validation/development tests based on the first eight chapters of the textbooks, blind tests based on the remaining textbook chapters, and baseline results given state-of-the-art PTLMs. Since the questions are balanced, random performance should be ~50%. T5, fine-tuned with BoolQ, achieves the same performance, suggesting that the textbook’s content is not pre-represented in the PTLM. Taking the exam closed book, but having read the textbook (i.e., adding the textbook to T5’s pre-training), yields at best a minor improvement (56%), suggesting that the PTLM may not have “understood” the textbook (or perhaps misunderstood the questions). Performance is better (~60%) when the exam is taken open book (i.e., allowing the machine to automatically retrieve a paragraph and use it to answer the question).
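The open-book condition can be sketched as retrieve-then-read. The word-overlap scorer below is an illustrative stand-in for whatever retriever the baselines actually use; the prompt format is likewise hypothetical.

    def retrieve(statement, paragraphs):
        """Return the paragraph sharing the most words with the statement."""
        words = set(statement.lower().split())
        return max(paragraphs,
                   key=lambda p: len(words & set(p.lower().split())))

    paragraphs = [
        "The House of Representatives has 435 voting members.",
        "The judicial branch interprets the laws of the United States.",
    ]
    statement = "The House has 435 voting members."
    context = retrieve(statement, paragraphs)
    prompt = f"passage: {context}\nstatement: {statement}\ntrue or false?"
    print(prompt)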

2020

pdf
Learning to Generalize for Sequential Decision Making
Xusen Yin | Ralph Weischedel | Jonathan May
Findings of the Association for Computational Linguistics: EMNLP 2020

We consider problems of making sequences of decisions to accomplish tasks, interacting via the medium of language. These problems are often tackled with reinforcement learning approaches, but we find that such models do not generalize well when applied to novel task domains. Moreover, the large amount of computation necessary to adequately train and explore the search space of sequential decision making under a reinforcement learning paradigm precludes the inclusion of large contextualized language models, which might otherwise enable the desired generalization ability. We introduce a teacher-student imitation learning methodology and a means of converting a reinforcement learning model into a natural language understanding model. Together, these methodologies enable the introduction of contextualized language models into the sequential decision making problem space. We show that models can learn faster and generalize better by leveraging both the imitation learning and the reformulation. Our models exceed teacher performance on various held-out decision problems, by up to 7% on in-domain problems and 24% on out-of-domain problems.
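A toy sketch of the teacher-student step, assuming the standard imitation-learning recipe: roll out the trained teacher to collect (observation, action) pairs, then fit the student to them with supervised updates. The environment and the two models are stand-ins, not the paper's RL agent or language model.

    import random

    class ToyEnv:
        def reset(self):
            self.t = 0
            return "start"
        def step(self, action):
            self.t += 1
            return f"state-{self.t}", self.t >= 3       # (observation, done)

    class ToyTeacher:
        def act(self, obs):
            return "go" if obs == "start" else "look"

    class ToyStudent:
        def __init__(self):
            self.counts = {}                            # (obs, action) -> count
        def update(self, obs, action):                  # stands in for a gradient step
            self.counts[(obs, action)] = self.counts.get((obs, action), 0) + 1
        def act(self, obs):
            moves = [a for (o, a) in self.counts if o == obs]
            return random.choice(moves) if moves else "look"

    def collect(env, teacher, episodes=10):
        data = []
        for _ in range(episodes):
            obs, done = env.reset(), False
            while not done:
                action = teacher.act(obs)               # teacher demonstrates
                data.append((obs, action))
                obs, done = env.step(action)
        return data

    env, teacher, student = ToyEnv(), ToyTeacher(), ToyStudent()
    for obs, action in collect(env, teacher):
        student.update(obs, action)
    print(student.act("start"))                         # imitates the teacher: "go"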

pdf
Content Planning for Neural Story Generation with Aristotelian Rescoring
Seraphina Goldfarb-Tarrant | Tuhin Chakrabarty | Ralph Weischedel | Nanyun Peng
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Long-form narrative text generated from large language models manages a fluent impersonation of human writing, but only at the local sentence level, and lacks structure or global cohesion. We posit that many of the problems of story generation can be addressed via high-quality content planning, and present a system that focuses on how to learn good plot structures to guide story generation. We utilize a plot-generation language model along with an ensemble of rescoring models that each implement an aspect of good story-writing as detailed in Aristotle’s Poetics. We find that stories written with our more principled plot-structure are both more relevant to a given prompt and higher quality than baselines that do not content plan, or that plan in an unprincipled way.
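The rescoring ensemble reduces to a weighted reranking of candidate plot continuations. A hedged sketch, with toy scorers standing in for the plot LM and the learned Aristotelian rescorers:

    def rerank(candidates, lm_score, rescorers, weights):
        """Pick the candidate maximizing a weighted mix of LM and rescorer scores."""
        def total(c):
            return weights[0] * lm_score(c) + sum(
                w * r(c) for w, r in zip(weights[1:], rescorers))
        return max(candidates, key=total)

    # Toy scorers standing in for learned models.
    lm = lambda c: -len(c) * 0.01                       # say shorter = more fluent
    relevance = lambda c: 1.0 if "dragon" in c else 0.0 # prompt-relevance rescorer

    best = rerank(["the dragon returns", "a long tangent about weather"],
                  lm, [relevance], weights=[1.0, 2.0])
    print(best)                                         # "the dragon returns"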

2019

pdf
Deep Structured Neural Network for Event Temporal Relation Extraction
Rujun Han | I-Hung Hsu | Mu Yang | Aram Galstyan | Ralph Weischedel | Nanyun Peng
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)

We propose a novel deep structured learning framework for event temporal relation extraction. The model consists of 1) a recurrent neural network (RNN) to learn scoring functions for pair-wise relations, and 2) a structured support vector machine (SSVM) to make joint predictions. The neural network automatically learns representations that account for long-term contexts to provide robust features for the structured model, while the SSVM incorporates domain knowledge, such as the transitive closure of temporal relations, as constraints to make better globally consistent decisions. By jointly training the two components, our model combines the benefits of both data-driven learning and knowledge exploitation. Experimental results on three high-quality event temporal relation datasets (TCR, MATRES, and TB-Dense) demonstrate that, when combined with pre-trained contextualized embeddings, the proposed model achieves significantly better performance than state-of-the-art methods on all three datasets. We also provide thorough ablation studies to investigate our model.
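The joint, transitivity-constrained decoding that the SSVM performs can be illustrated with a brute-force toy: three events, a two-label set (BEFORE/AFTER), and made-up pairwise scores, all ours rather than the paper's. Joint decoding rejects labelings that violate transitive closure, even when a local score prefers them.

    from itertools import product

    pairs = [("A", "B"), ("B", "C"), ("A", "C")]
    # Local pairwise scores, as an RNN might produce them.
    score = {("A", "B"): {"BEFORE": 2.0, "AFTER": 0.1},
             ("B", "C"): {"BEFORE": 1.5, "AFTER": 0.2},
             ("A", "C"): {"BEFORE": 0.3, "AFTER": 1.0}}  # locally prefers AFTER

    def consistent(lab):
        # Transitivity: A<B and B<C must imply A<C (and likewise for AFTER).
        if lab[("A", "B")] == "BEFORE" and lab[("B", "C")] == "BEFORE":
            return lab[("A", "C")] == "BEFORE"
        if lab[("A", "B")] == "AFTER" and lab[("B", "C")] == "AFTER":
            return lab[("A", "C")] == "AFTER"
        return True

    best = max((dict(zip(pairs, labs))
                for labs in product(["BEFORE", "AFTER"], repeat=len(pairs))),
               key=lambda lab: (consistent(lab),
                                sum(score[p][lab[p]] for p in pairs)))
    print(best)   # (A, C) is flipped to BEFORE despite the local preference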

2018

pdf
When ACE met KBP: End-to-End Evaluation of Knowledge Base Population with Component-level Annotation
Bonan Min | Marjorie Freedman | Roger Bock | Ralph Weischedel
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

pdf
Last Words: What Can Be Accomplished with the State of the Art in Information Extraction? A Personal View
Ralph Weischedel | Elizabeth Boschee
Computational Linguistics, Volume 44, Issue 4 - December 2018

Though information extraction (IE) research has more than a 25-year history, F1 scores remain low. Thus, one could question continued investment in IE research. In this article, we present three applications where information extraction of entities, relations, and/or events has been used, and note the common features that seem to have led to success. We also identify key research challenges whose solution seems essential for broader successes. Because a few practical deployments already exist and because breakthroughs on particular challenges would greatly broaden the technology’s deployment, further R&D investments are justified.

2017

pdf
Learning Transferable Representation for Bilingual Relation Extraction via Convolutional Neural Networks
Bonan Min | Zhuolin Jiang | Marjorie Freedman | Ralph Weischedel
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Typically, relation extraction models are trained to extract instances of a relation ontology using training data from a single language. However, the concepts represented by the relation ontology (e.g., ResidesIn, EmployeeOf) are language independent, while the number of annotated examples available for a given ontology varies between languages: for example, there are far fewer annotated examples in Spanish and Japanese than in English and Chinese. Furthermore, using only language-specific training data means that equivalently large amounts of training data must be manually annotated for each new language a system encounters. We propose a deep neural network to learn transferable, discriminative bilingual representations. Experiments on the ACE 2005 multilingual training corpus demonstrate that the joint training process results in significant improvement in relation classification performance over the monolingual counterparts. The learnt representation is discriminative and transferable between languages. When using 10% (25K English words, or 30K Chinese characters) of the training data, our approach doubles F1 compared to a monolingual baseline, and with 50% of the training data we achieve performance comparable to that of a monolingual system trained with 250K English words (or 300K Chinese characters).
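A schematic of the joint bilingual training regime, assuming the usual mixed-batch setup: examples from both languages pass through one shared encoder, so the learned space is language independent. The encoder and classifier below are toy stand-ins for the paper's CNN, not its implementation.

    import random

    def mixed_batches(en_data, zh_data, batch_size, rng):
        """Interleave English and Chinese examples into shuffled batches."""
        pool = ([(x, y, "en") for x, y in en_data] +
                [(x, y, "zh") for x, y in zh_data])
        rng.shuffle(pool)
        for i in range(0, len(pool), batch_size):
            yield pool[i:i + batch_size]

    def train(encoder, classifier, en_data, zh_data, rng):
        for batch in mixed_batches(en_data, zh_data, 4, rng):
            for sentence, relation, _lang in batch:
                z = encoder(sentence)           # shared, language-independent space
                classifier.update(z, relation)  # one classifier for both languages

    encoder = lambda s: len(s)                  # toy featurizer
    class Clf:
        def update(self, z, y): pass            # stands in for a gradient step
    train(encoder, Clf(), [("Bob lives in Rome", "ResidesIn")],
          [("李雷住在北京", "ResidesIn")], random.Random(0))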

2013

pdf
Automatic Extraction of Linguistic Metaphors with LDA Topic Modeling
Ilana Heintz | Ryan Gabbard | Mahesh Srivastava | Dave Barner | Donald Black | Marjorie Freedman | Ralph Weischedel
Proceedings of the First Workshop on Metaphor in NLP

2011

pdf
Extreme Extraction – Machine Reading in a Week
Marjorie Freedman | Lance Ramshaw | Elizabeth Boschee | Ryan Gabbard | Gary Kratkiewicz | Nicolas Ward | Ralph Weischedel
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

pdf
Coreference for Learning to Extract Relations: Yes Virginia, Coreference Matters
Ryan Gabbard | Marjorie Freedman | Ralph Weischedel
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf
Language Use: What can it tell us?
Marjorie Freedman | Alex Baron | Vasin Punyakanok | Ralph Weischedel
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf bib
CoNLL-2011 Shared Task: Modeling Unrestricted Coreference in OntoNotes
Sameer Pradhan | Lance Ramshaw | Mitchell Marcus | Martha Palmer | Ralph Weischedel | Nianwen Xue
Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task

2010

pdf
String-to-Dependency Statistical Machine Translation
Libin Shen | Jinxi Xu | Ralph Weischedel
Computational Linguistics, Volume 36, Issue 4 - December 2010

pdf
Statistical Machine Translation with a Factorized Grammar
Libin Shen | Bing Zhang | Spyros Matsoukas | Jinxi Xu | Ralph Weischedel
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf
Empirical Studies in Learning to Read
Marjorie Freedman | Edward Loper | Elizabeth Boschee | Ralph Weischedel
Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading

2009

pdf
Effective Use of Linguistic and Contextual Information for Statistical Machine Translation
Libin Shen | Jinxi Xu | Bing Zhang | Spyros Matsoukas | Ralph Weischedel
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

2008

pdf
A New String-to-Dependency Machine Translation Algorithm with a Target Dependency Language Model
Libin Shen | Jinxi Xu | Ralph Weischedel
Proceedings of ACL-08: HLT

2006

pdf
OntoNotes: The 90% Solution
Eduard Hovy | Mitchell Marcus | Martha Palmer | Lance Ramshaw | Ralph Weischedel
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers

2005

pdf
Combining Deep Linguistics Analysis and Surface Pattern Learning: A Hybrid Approach to Chinese Definitional Question Answering
Fuchun Peng | Ralph Weischedel | Ana Licuanan | Jinxi Xu
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

pdf
A Methodology for Extrinsically Evaluating Information Extraction Performance
Michael Crystal | Alex Baron | Katherine Godfrey | Linnea Micciulla | Yvette Tenney | Ralph Weischedel
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

2004

pdf
The Automatic Content Extraction (ACE) Program – Tasks, Data, and Evaluation
George Doddington | Alexis Mitchell | Mark Przybocki | Lance Ramshaw | Stephanie Strassel | Ralph Weischedel
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

2001

pdf
Experiments in Multi-Modal Automatic Content Extraction
Lance Ramshaw | Elizabeth Boschee | Sergey Bratus | Scott Miller | Rebecca Stone | Ralph Weischedel | Alex Zamanian
Proceedings of the First International Conference on Human Language Technology Research

pdf
FactBrowser Demonstration
Scott Miller | Sergey Bratus | Lance Ramshaw | Ralph Weischedel | Alex Zamanian
Proceedings of the First International Conference on Human Language Technology Research

2000

pdf
Cross-lingual Information Retrieval Using Hidden Markov Models
Jinxi Xu | Ralph Weischedel
2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora

pdf
Named Entity Extraction from Noisy Input: Speech and OCR
David Miller | Sean Boisen | Richard Schwartz | Rebecca Stone | Ralph Weischedel
Sixth Applied Natural Language Processing Conference

pdf
A Novel Use of Statistical Parsing to Extract Information from Text
Scott Miller | Heidi Fox | Lance Ramshaw | Ralph Weischedel
1st Meeting of the North American Chapter of the Association for Computational Linguistics

pdf
Annotating Resources for Information Extraction
Sean Boisen | Michael R. Crystal | Richard Schwartz | Rebecca Stone | Ralph Weischedel
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)

1998

pdf
BBN: Description of the SIFT System as Used for MUC-7
Scott Miller | Michael Crystal | Heidi Fox | Lance Ramshaw | Richard Schwartz | Rebecca Stone | Ralph Weischedel | The Annotation Group
Seventh Message Understanding Conference (MUC-7): Proceedings of a Conference Held in Fairfax, Virginia, April 29 - May 1, 1998

pdf
Algorithms That Learn to Extract Information BBN: TIPSTER Phase III
Scott Miller | Michael Crystal | Heidi Fox | Lance Ramshaw | Richard Schwartz | Rebecca Stone | Ralph Weischedel
TIPSTER TEXT PROGRAM PHASE III: Proceedings of a Workshop held at Baltimore, Maryland, October 13-15, 1998

1997

pdf
Nymble: a High-Performance Learning Name-finder
Daniel M. Bikel | Scott Miller | Richard Schwartz | Ralph Weischedel
Fifth Conference on Applied Natural Language Processing

1996

pdf
The HOOKAH Information Extraction System
Chris Barclay | Sean Boisen | Clinton Hyde | Ralph Weischedel
TIPSTER TEXT PROGRAM PHASE II: Proceedings of a Workshop held at Vienna, Virginia, May 6-8, 1996

pdf
Chinese Information Extraction and Retrieval
Sean Boisen | Michael Crystal | Erik Peterson | Ralph Weischedel | John Broglio | Jamie Callan | Bruce Croft | Theresa Hand | Thomas Keenan | Mary Ellen Okurowski
TIPSTER TEXT PROGRAM PHASE II: Proceedings of a Workshop held at Vienna, Virginia, May 6-8, 1996

pdf
Progress in Information Extraction
Ralph Weischedel | Sean Boisen | Daniel Bikel | Robert Bobrow | Michael Crystal | William Ferguson | Allan Wechsler | The PLUM Research Group
TIPSTER TEXT PROGRAM PHASE II: Proceedings of a Workshop held at Vienna, Virginia, May 6-8, 1996

pdf
Approaches in MET (Multi-Lingual Entity Task)
Damaris Ayuso | Daniel Bikel | Tasha Hall | Erik Peterson | Ralph Weischedel | Patrick Jost
TIPSTER TEXT PROGRAM PHASE II: Proceedings of a Workshop held at Vienna, Virginia, May 6-8, 1996

1994

pdf
Robustness, Portability and Scalability of Language Systems
Ralph Weischedel
Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994

1993

pdf
Example-Based Correction of Word Segmentation and Part of Speech Labelling
Tomoyoshi Matsukawa | Scott Miller | Ralph Weischedel
Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993

pdf
Session 13: New Directions
Ralph Weischedel
Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993

pdf
Robustness, Portability, and Scalability of Natural Language Systems
Ralph Weischedel
Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993

pdf
Coping with Ambiguity and Unknown Words through Probabilistic Models
Ralph Weischedel | Marie Meteer | Richard Schwartz | Lance Ramshaw | Jeff Palmucci
Computational Linguistics, Volume 19, Number 2, June 1993, Special Issue on Using Large Corpora: II

pdf
BBN’s PLUM Probabilistic Language Understanding System
Ralph Weischedel | Damaris Ayuso | Heidi Fox | Tomoyoshi Matsukawa | Constantine Papageorgiou | Dawn MacLaughlin | Masaichiro Kitagawa | Tsutomu Sakai | June Abe | Hiroto Hosiho | Yoichi Miyamoto | Scott Miller
TIPSTER TEXT PROGRAM: PHASE I: Proceedings of a Workshop held at Fredericksburg, Virginia, September 19-23, 1993

pdf
BBN: Description of the PLUM System as Used for MUC-5
Ralph Weischedel | Damaris Ayuso | Sean Boisen | Heidi Fox | Robert Ingria | Tomoyoshi Matsukawa | Constantine Papageorgiou | Dawn MacLaughlin | Masaichiro Kitagawa | Tsutomu Sakai | June Abe | Hiroto Hosiho | Yoichi Miyamoto | Scott Miller
Fifth Message Understanding Conference (MUC-5): Proceedings of a Conference Held in Baltimore, Maryland, August 25-27, 1993

1992

pdf
BBN PLUM: MUC-4 Test Results and Analysis
Ralph Weischedel | Damaris Ayuso | Sean Boisen | Heidi Fox | Herbert Gish | Robert Ingria
Fourth Message Understanding Conference (MUC-4): Proceedings of a Conference Held in McLean, Virginia, June 16-18, 1992

pdf
BBN: Description of the PLUM System as Used for MUC-4
Damaris Ayuso | Sean Boisen | Heidi Fox | Herb Gish | Robert Ingria | Ralph Weischedel
Fourth Message Understanding Conference (MUC-4): Proceedings of a Conference Held in McLean, Virginia, June 16-18, 1992

pdf
A New Approach to Text Understanding
Ralph Weischedel | Damaris Ayuso | Sean Boisen | Heidi Fox | Robert Ingria
Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, February 23-26, 1992

pdf
Robustness, Portability, and Scalability of Language Systems
Ralph Weischedel
Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, February 23-26, 1992

1991

pdf
Partial Parsing: A Report on Work in Progress
Ralph Weischedel | Damaris Ayuso | R. Bobrow | Sean Boisen | Robert Ingria | Jeff Palmucci
Speech and Natural Language: Proceedings of a Workshop Held at Pacific Grove, California, February 19-22, 1991

pdf
Studies in Part of Speech Labelling
Marie Meteer | Richard Schwartz | Ralph Weischedel
Speech and Natural Language: Proceedings of a Workshop Held at Pacific Grove, California, February 19-22, 1991

pdf
Adaptive Natural Language Processing
Ralph Weischedel
Speech and Natural Language: Proceedings of a Workshop Held at Pacific Grove, California, February 19-22, 1991

pdf
BBN PLUM: MUC-3 Test Results and Analysis
Ralph Weischedel | Damaris Ayuso | Sean Boisen | Robert Ingria | Jeff Palmucci
Third Message Understanding Conference (MUC-3): Proceedings of a Conference Held in San Diego, California, May 21-23, 1991

pdf
BBN: Description of the PLUM System as Used for MUC-3
Ralph Weischedel | Damaris Ayuso | Sean Boisen | Robert Ingria | Jeff Palmucci
Third Message Understanding Conference (MUC-3): Proceedings of a Conference Held in San Diego, California, May 21-23, 1991

1990

pdf
Multiple Underlying Systems: Translating User Requests into Programs to Produce Answers
Robert J. Bobrow | Philip Resnik | Ralph M. Weischedel
28th Annual Meeting of the Association for Computational Linguistics

pdf
Towards Understanding Text with a Very Large Vocabulary
Damaris Ayuso | R. Bobrow | Dawn MacLaughlin | Marie Meteer | Lance Ramshaw | Rich Schwartz | Ralph Weischedel
Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990

pdf
Adaptive Natural Language Processing
Ralph Weischedel
Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990

1989

pdf
Portability in the Janus Natural Language Interface
Ralph M. Weischedel | Robert J. Bobrow | Damaris Ayuso | Lance Ramshaw
Speech and Natural Language: Proceedings of a Workshop Held at Philadelphia, Pennsylvania, February 21-23, 1989

pdf
Research and Development in Natural Language Understanding
Ralph Weischedel
Speech and Natural Language: Proceedings of a Workshop Held at Cape Cod, Massachusetts, October 15-18, 1989

pdf
White Paper on Natural Language Processing
Ralph Weischedel | Jaime Carbonell | Barbara Grosz | Wendy Lehnert | Mitchell Marcus | Raymond Perrault | Robert Wilensky
Speech and Natural Language: Proceedings of a Workshop Held at Cape Cod, Massachusetts, October 15-18, 1989

pdf
A Hybrid Approach to Representation in the Janus Natural Language Processor
Ralph M. Weischedel
27th Annual Meeting of the Association for Computational Linguistics

1987

pdf
An Environment for Acquiring Semantic Information
Damaris M. Ayuso | Varda Shaked | Ralph M. Weischedel
25th Annual Meeting of the Association for Computational Linguistics

1986

pdf bib
Research and Development in Natural Language Processing at BBN Laboratories in the Strategic Computing Program
Ralph Weischedel | Remko Scha | Edward Walker | Damaris Ayuso | Andrew Haas | Erhard Hinrichs | Robert Ingria | Lance Ramshaw | Varda Shaked | David Stallard
Strategic Computing - Natural Language Workshop: Proceedings of a Workshop Held at Marina del Rey, California, May 1-2, 1986

pdf
Out of the Laboratory: A Case Study with the IRUS Natural Language Interface
Ralph M. Weischedel | Edward Walker | Damaris Ayuso | Jos de Bruin | Kimberle Koile | Lance Ramshaw | Varda Shaked
Strategic Computing - Natural Language Workshop: Proceedings of a Workshop Held at Marina del Rey, California, May 1-2, 1986

pdf
Living Up to Expectations: Computing Expert Responses
Aravind Joshi | Bonnie Webber | Ralph M. Weischedel
Strategic Computing - Natural Language Workshop: Proceedings of a Workshop Held at Marina del Rey, California, May 1-2, 1986

1985


Reflections on the Knowledge Needed to Process Ill-Formed Language
Ralph M. Weischedel
Proceedings of the first Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages

1984

pdf
Semantic Interpretation Using KL-ONE
Norman K. Sondheimer | Ralph M. Weischedel | Robert J. Bobrow
10th International Conference on Computational Linguistics and 22nd Annual Meeting of the Association for Computational Linguistics

pdf
Preventing False Inferences
Aravind Joshi | Bonnie Webber | Ralph M. Weischedel
10th International Conference on Computational Linguistics and 22nd Annual Meeting of the Association for Computational Linguistics

pdf
Problem Localization Strategies for Pragmatics Processing in Natural-Language Front Ends
Lance A. Ramshaw | Ralph M. Weischedel
10th International Conference on Computational Linguistics and 22nd Annual Meeting of the Association for Computational Linguistics

1983

pdf
Handling Ill-Formed Input: Session Introduction
Ralph M. Weischedel
First Conference on Applied Natural Language Processing

pdf
Meta-rules as a Basis for Processing Ill-Formed Input
Ralph M. Weischedel | Norman K. Sondheimer
American Journal of Computational Linguistics, Volume 9, Number 3-4, July-December 1983

1982

pdf
An Improved Heuristic for Ellipsis Processing
Ralph M. Weischedel | Norman K. Sondheimer
20th Annual Meeting of the Association for Computational Linguistics

1980

pdf
If The Parser Fails
Ralph M. Weischedel | John E. Black
18th Annual Meeting of the Association for Computational Linguistics

pdf
A Rule-Based Approach to Ill-Formed Input
Norman K. Sondheimer | Ralph M. Weischedel
COLING 1980 Volume 1: The 8th International Conference on Computational Linguistics

pdf
Responding Intelligently to Unparsable Inputs
Ralph M. Weischedel | John E. Black
American Journal of Computational Linguistics, Volume 6, Number 2, April-June 1980

1977

pdf
Computation of a Subclass of Inferences: Presupposition and Entailment
Aravind K. Joshi | Ralph Weischedel
American Journal of Computational Linguistics (February 1977)
