Raymond Mooney

Also published as: Raymond J. Mooney


2023

Text-to-SQL Error Correction with Language Models of Code
Ziru Chen | Shijie Chen | Michael White | Raymond Mooney | Ali Payani | Jayanth Srinivasa | Yu Su | Huan Sun
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Despite recent progress in text-to-SQL parsing, current semantic parsers are still not accurate enough for practical use. In this paper, we investigate how to build automatic text-to-SQL error correction models. Noticing that token-level edits are out of context and sometimes ambiguous, we propose building clause-level edit models instead. Moreover, while most language models of code are not specifically pre-trained for SQL, they know common data structures and their operations in programming languages such as Python. Thus, we propose a novel representation for SQL queries and their edits that adheres more closely to the pre-training corpora of language models of code. Our error correction model improves the exact set match accuracy of different parsers by 2.4-6.5 points and obtains up to a 4.3-point absolute improvement over two strong baselines.
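
To make the representation idea concrete, here is a minimal sketch, assuming a hypothetical clause-keyed dictionary layout (not the paper's actual encoding), of how a SQL query and a clause-level edit can be expressed with ordinary Python data structures of the kind code language models see in pre-training:

```python
# Hypothetical clause-level representation of a SQL query; the keys and
# the example are illustrative, not the paper's exact format.
buggy_query = {
    "select": ["name"],
    "from": "singer",
    "where": ["age > 20"],
    "order_by": [],  # the parser omitted the ORDER BY clause
}

def apply_clause_edit(query, clause, new_value):
    """Replace one whole clause: unlike a token-level edit, the change
    is unambiguous and carries its own context."""
    fixed = dict(query)
    fixed[clause] = new_value
    return fixed

corrected = apply_clause_edit(buggy_query, "order_by", ["age DESC"])
print(corrected["order_by"])  # ['age DESC']
```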

“Female Astronaut: Because sandwiches won’t make themselves up there”: Towards Multimodal misogyny detection in memes
Smriti Singh | Amritha Haridasan | Raymond Mooney
The 7th Workshop on Online Abuse and Harms (WOAH)

A rise in the circulation of memes has led to the spread of a new form of multimodal hateful content. Women receive a disproportionate degree of hate on the internet, and multimodal misogyny is more challenging to detect than traditional text-based misogyny, making the identification of misogynistic memes online a task of utmost importance. To this end, the MAMI dataset was released, consisting of 12,000 memes annotated for misogyny and four sub-classes of misogyny: shame, objectification, violence, and stereotype. While this balanced dataset is widely cited, we find that the task itself remains largely unsolved. Thus, in our work, we investigate the performance of multiple models on this task, analyze why even state-of-the-art models find it so challenging, and examine whether domain-specific pretraining helps. Our results show that pretraining BERT on hateful memes and leveraging an attention-based approach with ViT outperforms state-of-the-art models by more than 10%. Further, we provide insight into why these models may be struggling with this task through an extensive qualitative analysis of random samples from the test set.

Using Planning to Improve Semantic Parsing of Instructional Texts
Vanya Cohen | Raymond Mooney
Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)

We develop a symbolic planning-based decoder to improve the few-shot semantic parsing of instructional texts. The system takes long-form instructional texts as input and produces sequences of actions in a formal language that enable execution of the instructions. This task poses unique challenges since input texts may contain long context dependencies and ambiguous and domain-specific language. Valid semantic parses also require sequences of steps that constitute an executable plan. We build on recent progress in semantic parsing by leveraging large language models to learn parsers from small amounts of training data. During decoding, our method employs planning methods and domain information to rank and correct candidate parses. To validate our method, we evaluate on four domains: two household instruction-following domains and two cooking recipe interpretation domains. We present results for few-shot semantic parsing using leave-one-out cross-validation. We show that utilizing planning domain information improves the quality of generated plans. Through ablations we also explore the effects of our decoder design choices.
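
To illustrate the plan-based decoding step, here is a minimal sketch in which a toy executability check, a hypothetical stand-in for a real symbolic planner and domain definition, filters a language model's candidate parses before the highest-scoring survivor is chosen:

```python
# Toy precondition/effect tables; a real system would query a planner
# with the full domain definition instead.
PRECONDITIONS = {"pour": "holding_cup", "pick_up_cup": None}
EFFECTS = {"pick_up_cup": "holding_cup"}

def executable(plan, state):
    """Check that every action's precondition holds when it is reached."""
    for action in plan:
        need = PRECONDITIONS.get(action)
        if need is not None and need not in state:
            return False
        if action in EFFECTS:
            state = state | {EFFECTS[action]}
    return True

# (candidate action sequence, language-model log-score)
candidates = [
    (["pour"], -1.2),                 # scored higher, but not executable
    (["pick_up_cup", "pour"], -1.5),  # a valid plan
]
valid = [(plan, score) for plan, score in candidates if executable(plan, set())]
best_plan = max(valid, key=lambda c: c[1])[0]
print(best_plan)  # ['pick_up_cup', 'pour']
```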

2022

Using Developer Discussions to Guide Fixing Bugs in Software
Sheena Panthaplackel | Milos Gligoric | Junyi Jessy Li | Raymond Mooney
Findings of the Association for Computational Linguistics: EMNLP 2022

Automatically fixing software bugs is a challenging task. While recent work showed that natural language context is useful in guiding bug-fixing models, the approach required prompting developers to provide this context, which was simulated through commit messages written after the bug-fixing code changes were made. We instead propose using bug report discussions, which are available before the task is performed and are also naturally occurring, avoiding the need for any additional information from developers. For this, we augment standard bug-fixing datasets with bug report discussions. Using these newly compiled datasets, we demonstrate that various forms of natural language context derived from such discussions can aid bug-fixing, even leading to improved performance over using commit messages corresponding to the oracle bug-fixing commits.

Using Commonsense Knowledge to Answer Why-Questions
Yash Kumar Lal | Niket Tandon | Tanvi Aggarwal | Horace Liu | Nathanael Chambers | Raymond Mooney | Niranjan Balasubramanian
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Answering questions in narratives about why events happened often requires commonsense knowledge external to the text. What aspects of this knowledge are available in large language models? What aspects can be made accessible via external commonsense resources? We study these questions in the context of answering questions in the TellMeWhy dataset using COMET as a source of relevant commonsense relations. We analyze the effects of model size (T5 and GPT-3) along with methods of injecting knowledge (COMET) into these models. Results show that the largest models, as expected, yield substantial improvements over base models. Injecting external knowledge helps models of various sizes, but the amount of improvement decreases with larger model size. We also find that the format in which knowledge is provided is critical, and that smaller models benefit more from larger amounts of knowledge. Finally, we develop an ontology of knowledge types and analyze the relative coverage of the models across these categories.
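
A minimal sketch of the knowledge-injection setup, with hard-coded stand-ins for retrieved COMET relations and a hypothetical prompt template (the finding that this format matters is the point being illustrated):

```python
# The relations and template below are illustrative assumptions; a real
# system would query COMET for relations relevant to the question.
narrative = "Sara noticed the leaves were drooping. She watered the plants."
question = "Why did Sara water the plants?"
comet_relations = [
    "xIntent: to keep the plants alive",
    "xEffect: the plants recover",
]

prompt = (
    f"Narrative: {narrative}\n"
    + "".join(f"Knowledge: {r}\n" for r in comet_relations)
    + f"Question: {question}\nAnswer:"
)
print(prompt)  # fed to a T5- or GPT-3-style model
```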

Entity-Focused Dense Passage Retrieval for Outside-Knowledge Visual Question Answering
Jialin Wu | Raymond Mooney
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Most Outside-Knowledge Visual Question Answering (OK-VQA) systems employ a two-stage framework that first retrieves external knowledge given the visual question and then predicts the answer based on the retrieved content. However, the retrieved knowledge is often inadequate. Retrievals are frequently too general and fail to cover specific knowledge needed to answer the question. Also, the naturally available supervision (whether the passage contains the correct answer) is weak and does not guarantee question relevancy. To address these issues, we propose an Entity-Focused Retrieval (EnFoRe) model that provides stronger supervision during training and recognizes question-relevant entities to help retrieve more specific knowledge. Experiments show that our EnFoRe model achieves superior retrieval performance on OK-VQA, currently the largest outside-knowledge VQA dataset. We also combine the retrieved knowledge with state-of-the-art VQA models, and achieve a new state-of-the-art performance on OK-VQA.

2021

TellMeWhy: A Dataset for Answering Why-Questions in Narratives
Yash Kumar Lal | Nathanael Chambers | Raymond Mooney | Niranjan Balasubramanian
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Systematic Generalization on gSCAN with Language Conditioned Embedding
Tong Gao | Qi Huang | Raymond Mooney
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing

Systematic generalization refers to a learning algorithm's ability to extrapolate learned behavior to unseen situations that are distinct but semantically similar to its training data. As recent work has shown, state-of-the-art deep learning models fail dramatically, even on tasks for which they are designed, when the test set differs systematically from the training data. We hypothesize that explicitly modeling the relations between objects in their contexts while learning their representations will help achieve systematic generalization. Therefore, we propose a novel method that learns objects' contextualized embeddings with dynamic message passing conditioned on the input natural language and is end-to-end trainable with other downstream deep learning modules. To our knowledge, this is the first model that significantly outperforms the provided baseline and reaches state-of-the-art performance on grounded SCAN (gSCAN), a grounded natural language navigation dataset designed to require systematic generalization in its test splits.
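
A minimal numpy sketch of one language-conditioned message-passing update over object embeddings; the dimensions, the conditioning scheme, and the nonlinearity are illustrative assumptions rather than the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
objects = rng.standard_normal((5, 8))   # 5 objects, 8-dim embeddings
language = rng.standard_normal(8)       # pooled instruction embedding

# Attention logits depend on object pairs and on the instruction.
logits = objects @ objects.T + objects @ language
logits -= logits.max(axis=1, keepdims=True)  # stable softmax
weights = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

messages = weights @ objects            # aggregate neighbor information
objects = np.tanh(objects + messages)   # one contextualizing update
print(objects.shape)                    # (5, 8)
```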

Learning to Update Natural Language Comments Based on Code Changes
Sheena Panthaplackel | Pengyu Nie | Milos Gligoric | Junyi Jessy Li | Raymond Mooney
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

We formulate the novel task of automatically updating an existing natural language comment based on changes in the body of code it accompanies. We propose an approach that learns to correlate changes across two distinct language representations, to generate a sequence of edits that are applied to the existing comment to reflect the source code modifications. We train and evaluate our model using a dataset that we collected from commit histories of open-source software projects, with each example consisting of a concurrent update to a method and its corresponding comment. We compare our approach against multiple baselines using both automatic metrics and human evaluation. Results reflect the challenge of this task and that our model outperforms baselines with respect to making edits.
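
A minimal sketch of the edit-application idea: rather than regenerating the comment from scratch, a model emits a sequence of edit operations that are applied to the existing comment's tokens. The operation format here is a hypothetical illustration:

```python
def apply_edits(tokens, edits):
    """Apply (op, arg) edits left to right; unconsumed tokens are kept."""
    out, i = [], 0
    for op, arg in edits:
        if op == "keep":
            out.append(tokens[i]); i += 1
        elif op == "delete":
            i += 1
        elif op == "insert":
            out.append(arg)
    return out + tokens[i:]

old_comment = "Returns the sum of two ints".split()
edits = [("keep", None), ("keep", None), ("delete", None), ("insert", "product")]
print(" ".join(apply_edits(old_comment, edits)))
# Returns the product of two ints
```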

2019

Generating Question Relevant Captions to Aid Visual Question Answering
Jialin Wu | Zeyuan Hu | Raymond Mooney
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Visual question answering (VQA) and image captioning require a shared body of general knowledge connecting language and vision. We present a novel approach to better VQA performance that exploits this connection by jointly generating captions that are targeted to help answer a specific visual question. The model is trained using an existing caption dataset by automatically determining question-relevant captions using an online gradient-based method. Experimental results on the VQA v2 challenge demonstrate that our approach obtains state-of-the-art VQA performance (e.g., 68.4% on the test-standard set using a single model) by simultaneously generating question-relevant captions.

Do Human Rationales Improve Machine Explanations?
Julia Strout | Ye Zhang | Raymond Mooney
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

Work on “learning with rationales” shows that humans providing explanations to a machine learning system can improve the system’s predictive accuracy. However, this work has not been connected to work in “explainable AI” which concerns machines explaining their reasoning to humans. In this work, we show that learning with rationales can also improve the quality of the machine’s explanations as evaluated by human judges. Specifically, we present experiments showing that, for CNN-based text classification, explanations generated using “supervised attention” are judged superior to explanations generated using normal unsupervised attention.
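
A minimal numpy sketch of the supervised-attention idea: the usual task loss is augmented with a divergence term that pulls the model's attention distribution toward human-marked rationale tokens. The numbers and the weighting are illustrative:

```python
import numpy as np

attention = np.array([0.1, 0.6, 0.2, 0.1])  # model attention over 4 tokens
rationale = np.array([0.0, 1.0, 1.0, 0.0])  # human-marked rationale tokens
rationale = rationale / rationale.sum()     # normalize to a distribution

task_loss = 0.35                            # stand-in cross-entropy value
eps = 1e-9                                  # avoid log(0)
kl = np.sum(rationale * np.log((rationale + eps) / (attention + eps)))

lam = 0.5                                   # supervision weight (tuned)
total_loss = task_loss + lam * kl
print(round(float(total_loss), 4))
```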

Faithful Multimodal Explanation for Visual Question Answering
Jialin Wu | Raymond Mooney
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

AI systems’ ability to explain their reasoning is critical to their utility and trustworthiness. Deep neural networks have enabled significant progress on many challenging problems such as visual question answering (VQA). However, most of them are opaque black boxes with limited explanatory capability. This paper presents a novel approach to developing a high-performing VQA system that can elucidate its answers with integrated textual and visual explanations that faithfully reflect important aspects of its underlying reasoning while capturing the style of comprehensible human explanations. Extensive experimental evaluation demonstrates the advantages of this approach compared to competing methods using both automated metrics and human evaluation.

2018

Stacking with Auxiliary Features for Visual Question Answering
Nazneen Fatema Rajani | Raymond Mooney
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

Visual Question Answering (VQA) is a well-known and challenging task that requires systems to jointly reason about natural language and vision. Deep learning models in various forms have been the standard for solving VQA. However, some of these VQA models are better at certain types of image-question pairs than other models. Ensembling VQA models intelligently to leverage their diverse expertise is, therefore, advantageous. Stacking With Auxiliary Features (SWAF) is an intelligent ensembling technique which learns to combine the results of multiple models using features of the current problem as context. We propose four categories of auxiliary features for ensembling for VQA. Three out of the four categories of features can be inferred from an image-question pair and do not require querying the component models. The fourth category of auxiliary features uses model-specific explanations. In this paper, we describe how we use these various categories of auxiliary features to improve performance for VQA. Using SWAF to effectively ensemble three recent systems, we obtain a new state-of-the-art. Our work also highlights the advantages of explainable AI models.
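
A minimal sketch of the stacking setup on synthetic data: a meta-classifier learns, from component-model confidences plus auxiliary features, whether a proposed answer is correct. The feature choices and labels below are toy assumptions, not the paper's actual features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
confidences = rng.random((n, 3))   # three component VQA systems
auxiliary = rng.random((n, 4))     # e.g., question/image-derived features
X = np.hstack([confidences, auxiliary])

# Synthetic "is this answer correct?" labels for the sketch.
y = (confidences.mean(axis=1) + 0.2 * auxiliary[:, 0]
     + 0.1 * rng.standard_normal(n)) > 0.6

meta = LogisticRegression(max_iter=1000).fit(X, y)
print("meta-classifier training accuracy:", meta.score(X, y))
```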

Learning a Policy for Opportunistic Active Learning
Aishwarya Padmakumar | Peter Stone | Raymond Mooney
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Active learning identifies data points to label that are expected to be the most useful in improving a supervised model. Opportunistic active learning incorporates active learning into interactive tasks that constrain possible queries during interactions. Prior work has shown that opportunistic active learning can be used to improve grounding of natural language descriptions in an interactive object retrieval task. In this work, we use reinforcement learning for such an object retrieval task, to learn a policy that effectively trades off task completion with model improvement that would benefit future tasks.

2017

Integrated Learning of Dialog Strategies and Semantic Parsing
Aishwarya Padmakumar | Jesse Thomason | Raymond J. Mooney
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers

Natural language understanding and dialog management are two integral components of interactive dialog systems. Previous research has used machine learning techniques to individually optimize these components, with different forms of direct and indirect supervision. We present an approach to integrate the learning of both a dialog strategy using reinforcement learning, and a semantic parser for robust natural language understanding, using only natural dialog interaction for supervision. Experimental results on a simulated task of robot instruction demonstrate that joint learning of both components improves dialog performance over learning either of these components alone.

Guiding Interaction Behaviors for Multi-modal Grounded Language Learning
Jesse Thomason | Jivko Sinapov | Raymond Mooney
Proceedings of the First Workshop on Language Grounding for Robotics

Multi-modal grounded language learning connects language predicates to physical properties of objects in the world. Sensing with multiple modalities, such as audio, haptics, and visual colors and shapes, while performing interaction behaviors like lifting, dropping, and looking at objects, enables a robot to ground non-visual predicates like “empty” as well as visual predicates like “red”. Previous work has established that grounding in multi-modal space improves performance on object retrieval from human descriptions. In this work, we gather behavior annotations from humans and demonstrate that these improve language grounding performance by allowing a system to focus on relevant behaviors for words like “white” or “half-full” that can be understood by looking or lifting, respectively. We also explore adding modality annotations (whether to focus on audio or haptics when performing a behavior), which improves performance, and sharing information between linguistically related predicates (if “green” is a color, “white” is a color), which improves grounding recall but at the cost of precision.

Leveraging Discourse Information Effectively for Authorship Attribution
Elisa Ferracane | Su Wang | Raymond Mooney
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

We explore techniques to maximize the effectiveness of discourse information in the task of authorship attribution. We present a novel method to embed discourse features in a Convolutional Neural Network text classifier, which achieves a state-of-the-art result by a significant margin. We empirically investigate several featurization methods to understand the conditions under which discourse features contribute non-trivial performance gains, and analyze discourse embeddings.

Improving Black-box Speech Recognition using Semantic Parsing
Rodolfo Corona | Jesse Thomason | Raymond Mooney
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Speech is a natural channel for human-computer interaction in robotics and consumer applications. Natural language understanding pipelines that start with speech can have trouble recovering from speech recognition errors. Black-box automatic speech recognition (ASR) systems, built for general purpose use, are unable to take advantage of in-domain language models that could otherwise ameliorate these errors. In this work, we present a method for re-ranking black-box ASR hypotheses using an in-domain language model and semantic parser trained for a particular task. Our re-ranking method significantly improves both transcription accuracy and semantic understanding over a state-of-the-art ASR’s vanilla output.
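
A minimal sketch of the re-ranking step, with a toy in-domain scorer standing in for the trained language model and semantic parser; the hypotheses, scores, and interpolation weight are hypothetical:

```python
# (hypothesis, black-box ASR log-score) pairs from an n-best list
nbest = [
    ("goat to the office", -3.9),  # ASR's top guess, nonsensical in-domain
    ("go to the office", -4.1),
]

def in_domain_score(hyp):
    """Stand-in for log P_LM(hyp) plus a semantic parser's confidence."""
    vocab = {"go", "to", "the", "office", "kitchen"}
    return sum(0.0 if w in vocab else -5.0 for w in hyp.split())

alpha = 0.5  # interpolation weight, tuned on held-out data
best = max(nbest, key=lambda h: (1 - alpha) * h[1] + alpha * in_domain_score(h[0]))
print(best[0])  # go to the office
```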

Dialog for Language to Code
Shobhit Chaurasia | Raymond J. Mooney
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Generating computer code from natural language descriptions has been a long-standing problem. Prior work in this domain has restricted itself to generating code in one shot from a single description. To overcome this limitation, we propose a system that can engage users in a dialog to clarify their intent until it has all the information to produce correct code. To evaluate the efficacy of dialog in code generation, we focus on synthesizing conditional statements in the form of IFTTT recipes.

2016

Combining Supervised and Unsupervised Ensembles for Knowledge Base Population
Nazneen Fatema Rajani | Raymond Mooney
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

Improving LSTM-based Video Description with Linguistic Knowledge Mined from Text
Subhashini Venugopalan | Lisa Anne Hendricks | Raymond Mooney | Kate Saenko
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

Using Sentence-Level LSTM Language Models for Script Inference
Karl Pichotta | Raymond J. Mooney
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Statistical Script Learning with Recurrent Neural Networks
Karl Pichotta | Raymond Mooney
Proceedings of the Workshop on Uphill Battles in Language Processing: Scaling Early Achievements to Robust Methods

Representing Meaning with a Combination of Logical and Distributional Models
I. Beltagy | Stephen Roller | Pengxiang Cheng | Katrin Erk | Raymond J. Mooney
Computational Linguistics, Volume 42, Issue 4 - December 2016

2015

Translating Videos to Natural Language Using Deep Recurrent Neural Networks
Subhashini Venugopalan | Huijuan Xu | Jeff Donahue | Marcus Rohrbach | Raymond Mooney | Kate Saenko
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Stacked Ensembles of Information Extractors for Knowledge-Base Population
Vidhoon Viswanathan | Nazneen Fatema Rajani | Yinon Bentor | Raymond Mooney
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Language to Code: Learning Semantic Parsers for If-This-Then-That Recipes
Chris Quirk | Raymond Mooney | Michel Galley
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

2014

Statistical Script Learning with Multi-Argument Events
Karl Pichotta | Raymond Mooney
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics

Semantic Parsing using Distributional Semantics and Probabilistic Logic
Islam Beltagy | Katrin Erk | Raymond Mooney
Proceedings of the ACL 2014 Workshop on Semantic Parsing

Probabilistic Soft Logic for Semantic Textual Similarity
Islam Beltagy | Katrin Erk | Raymond Mooney
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

UTexas: Natural Language Semantics using Distributional Semantics and Probabilistic Logic
Islam Beltagy | Stephen Roller | Gemma Boleda | Katrin Erk | Raymond Mooney
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

Integrating Language and Vision to Generate Natural Language Descriptions of Videos in the Wild
Jesse Thomason | Subhashini Venugopalan | Sergio Guadarrama | Kate Saenko | Raymond Mooney
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2013

Detecting Promotional Content in Wikipedia
Shruti Bhosale | Heath Vinicombe | Raymond Mooney
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

Generating Natural-Language Video Descriptions Using Text-Mined Knowledge
Niveda Krishnamoorthy | Girish Malkarnenkar | Raymond Mooney | Kate Saenko | Sergio Guadarrama
Proceedings of the Workshop on Vision and Natural Language Processing

Montague Meets Markov: Deep Semantics with Probabilistic Logical Form
Islam Beltagy | Cuong Chau | Gemma Boleda | Dan Garrette | Katrin Erk | Raymond Mooney
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity

Adapting Discriminative Reranking to Grounded Language Learning
Joohyun Kim | Raymond Mooney
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2012

Unsupervised PCFG Induction for Grounded Language Learning with Highly Ambiguous Supervision
Joohyun Kim | Raymond Mooney
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

Learning to “Read Between the Lines” using Bayesian Logic Programs
Sindhu Raghavan | Raymond Mooney | Hyeonseo Ku
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Learning Language from Perceptual Context
Raymond Mooney
Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics

2011

Implementing Weighted Abduction in Markov Logic
James Blythe | Jerry Hobbs | Pedro Domingos | Rohit Kate | Raymond Mooney
Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011)

Integrating Logical Representations with Probabilistic Information using Markov Logic
Dan Garrette | Katrin Erk | Raymond Mooney
Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011)

Cross-Cutting Models of Lexical Semantics
Joseph Reisinger | Raymond Mooney
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

2010

A Mixture Model with Sharing for Lexical Semantics
Joseph Reisinger | Raymond Mooney
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Learning to Predict Readability using Diverse Linguistic Features
Rohit Kate | Xiaoqiang Luo | Siddharth Patwardhan | Martin Franz | Radu Florian | Raymond Mooney | Salim Roukos | Chris Welty
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

Generative Alignment and Semantic Parsing for Learning from Ambiguous Supervision
Joohyun Kim | Raymond Mooney
Coling 2010: Posters

Multi-Prototype Vector-Space Models of Word Meaning
Joseph Reisinger | Raymond J. Mooney
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Authorship Attribution Using Probabilistic Context-Free Grammars
Sindhu Raghavan | Adriana Kovashka | Raymond Mooney
Proceedings of the ACL 2010 Conference Short Papers

Joint Entity and Relation Extraction Using Card-Pyramid Parsing
Rohit J. Kate | Raymond Mooney
Proceedings of the Fourteenth Conference on Computational Natural Language Learning

2009

Learning a Compositional Semantic Parser using an Existing Syntactic Parser
Ruifang Ge | Raymond Mooney
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

2007

Learning Synchronous Grammars for Semantic Parsing with Lambda Calculus
Yuk Wah Wong | Raymond Mooney
Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics

Learning to Extract Relations from the Web using Minimal Supervision
Razvan Bunescu | Raymond Mooney
Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics

Generation by Inverting a Semantic Parser that Uses Statistical Machine Translation
Yuk Wah Wong | Raymond Mooney
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference

Semi-Supervised Learning for Semantic Parsing using Support Vector Machines
Rohit Kate | Raymond Mooney
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers

2006

Using String-Kernels for Learning Semantic Parsers
Rohit J. Kate | Raymond J. Mooney
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

Discriminative Reranking for Semantic Parsing
Ruifang Ge | Raymond J. Mooney
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions

Learning for Semantic Parsing with Statistical Machine Translation
Yuk Wah Wong | Raymond Mooney
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

Integrating Co-occurrence Statistics with Information Extraction for Robust Retrieval of Protein Interactions from Medline
Razvan Bunescu | Raymond Mooney | Arun Ramani | Edward Marcotte
Proceedings of the HLT-NAACL BioNLP Workshop on Linking Natural Language and Biology

2005

Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing
Raymond Mooney | Chris Brew | Lee-Feng Chien | Katrin Kirchhoff
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

A Shortest Path Dependency Kernel for Relation Extraction
Razvan Bunescu | Raymond Mooney
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

A Statistical Semantic Parser that Integrates Syntax and Semantics
Ruifang Ge | Raymond Mooney
Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)

Using Biomedical Literature Mining to Consolidate the Set of Known Human Protein-Protein Interactions
Arun Ramani | Razvan Bunescu | Raymond Mooney | Edward Marcotte
Proceedings of the ACL-ISMB Workshop on Linking Biological Literature, Ontologies and Databases: Mining Biological Semantics

2004

Collective Information Extraction with Relational Markov Networks
Razvan Bunescu | Raymond Mooney
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)

2000

Automated Construction of Database Interfaces: Integrating Statistical and Relational Learning for Semantic Parsing
Lappoon R. Tang | Raymond J. Mooney
2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora

1998

Semantic Lexicon Acquisition for Learning Natural Language Interfaces
Cynthia A. Thompson | Raymond J. Mooney
Sixth Workshop on Very Large Corpora

1997

Learning Parse and Translation Decisions from Examples with Rich Context
Ulf Hermjakob | Raymond J. Mooney
35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics

Relational Learning of Pattern-Match Rules for Information Extraction
Mary Elaine Califf | Raymond J. Mooney
CoNLL97: Computational Natural Language Learning

1996

Comparative Experiments on Disambiguating Word Senses: An Illustration of the Role of Bias in Machine Learning
Raymond J. Mooney
Conference on Empirical Methods in Natural Language Processing