Raymond Mooney

Also published as: Raymond J. Mooney


2021

TellMeWhy: A Dataset for Answering Why-Questions in Narratives
Yash Kumar Lal | Nathanael Chambers | Raymond Mooney | Niranjan Balasubramanian
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Systematic Generalization on gSCAN with Language Conditioned Embedding
Tong Gao | Qi Huang | Raymond Mooney
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing

Systematic Generalization refers to a learning algorithm’s ability to extrapolate learned behavior to unseen situations that are distinct but semantically similar to its training data. As shown in recent work, state-of-the-art deep learning models fail dramatically even on tasks for which they are designed when the test set is systematically different from the training data. We hypothesize that explicitly modeling the relations between objects in their contexts while learning their representations will help achieve systematic generalization. Therefore, we propose a novel method that learns objects’ contextualized embeddings with dynamic message passing conditioned on the input natural language and end-to-end trainable with other downstream deep learning modules. To our knowledge, this model is the first one that significantly outperforms the provided baseline and reaches state-of-the-art performance on grounded SCAN (gSCAN), a grounded natural language navigation dataset designed to require systematic generalization in its test splits.
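As a rough illustration of language-conditioned message passing over object embeddings, the sketch below conditions pairwise message weights on an encoded command; all module names, dimensions, and update rules are assumptions for exposition, not the authors' implementation.

import torch
import torch.nn as nn

class LanguageConditionedMessagePassing(nn.Module):
    """Toy language-conditioned message passing over object embeddings (illustrative only)."""

    def __init__(self, obj_dim: int, lang_dim: int, rounds: int = 2):
        super().__init__()
        self.rounds = rounds
        self.lang_proj = nn.Linear(lang_dim, obj_dim)   # project the command into object space
        self.update = nn.Linear(2 * obj_dim, obj_dim)   # combine an object with its incoming messages

    def forward(self, objects: torch.Tensor, command: torch.Tensor) -> torch.Tensor:
        # objects: (num_objects, obj_dim); command: (lang_dim,) encoded instruction
        cond = self.lang_proj(command)
        for _ in range(self.rounds):
            # Message weights depend on the instruction: objects the command
            # relates to each other exchange stronger messages.
            scores = (objects * cond) @ objects.t()
            weights = torch.softmax(scores, dim=-1)
            incoming = weights @ objects
            objects = torch.tanh(self.update(torch.cat([objects, incoming], dim=-1)))
        return objects  # contextualized object embeddings

# Example: five objects with 32-d features and a 64-d command encoding.
mp = LanguageConditionedMessagePassing(obj_dim=32, lang_dim=64)
contextualized = mp(torch.randn(5, 32), torch.randn(64))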

Learning to Update Natural Language Comments Based on Code Changes
Sheena Panthaplackel | Pengyu Nie | Milos Gligoric | Junyi Jessy Li | Raymond Mooney
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

We formulate the novel task of automatically updating an existing natural language comment based on changes in the body of code it accompanies. We propose an approach that learns to correlate changes across two distinct language representations in order to generate a sequence of edits that are applied to the existing comment to reflect the source code modifications. We train and evaluate our model using a dataset that we collected from commit histories of open-source software projects, with each example consisting of a concurrent update to a method and its corresponding comment. We compare our approach against multiple baselines using both automatic metrics and human evaluation. Results reflect the challenge of this task and show that our model outperforms baselines with respect to making edits.
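A toy illustration of the edit-sequence formulation: a predicted sequence of KEEP/DELETE/INSERT operations is applied to the tokens of the existing comment. The operation vocabulary and helper function below are hypothetical stand-ins, not the paper's exact edit scheme.

from typing import List, Tuple

def apply_edits(old_comment: List[str], edits: List[Tuple[str, str]]) -> List[str]:
    """Apply (op, token) edits: KEEP and DELETE consume one old token, INSERT adds a new one."""
    out, i = [], 0
    for op, token in edits:
        if op == "KEEP":
            out.append(old_comment[i])
            i += 1
        elif op == "DELETE":
            i += 1
        elif op == "INSERT":
            out.append(token)
    out.extend(old_comment[i:])  # copy any remaining tokens unchanged
    return out

# If a method changed from returning the maximum to the minimum, the model
# might predict edits that swap the corresponding word in the comment:
old = "Returns the maximum value .".split()
edits = [("KEEP", ""), ("KEEP", ""), ("DELETE", ""), ("INSERT", "minimum"), ("KEEP", ""), ("KEEP", "")]
print(" ".join(apply_edits(old, edits)))  # -> Returns the minimum value .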

2019

Generating Question Relevant Captions to Aid Visual Question Answering
Jialin Wu | Zeyuan Hu | Raymond Mooney
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Visual question answering (VQA) and image captioning require a shared body of general knowledge connecting language and vision. We present a novel approach to better VQA performance that exploits this connection by jointly generating captions that are targeted to help answer a specific visual question. The model is trained using an existing caption dataset by automatically determining question-relevant captions using an online gradient-based method. Experimental results on the VQA v2 challenge demonstrate that our approach obtains state-of-the-art VQA performance (e.g., 68.4% on the Test-standard set using a single model) by simultaneously generating question-relevant captions.

Do Human Rationales Improve Machine Explanations?
Julia Strout | Ye Zhang | Raymond Mooney
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

Work on “learning with rationales” shows that humans providing explanations to a machine learning system can improve the system’s predictive accuracy. However, this work has not been connected to work in “explainable AI” which concerns machines explaining their reasoning to humans. In this work, we show that learning with rationales can also improve the quality of the machine’s explanations as evaluated by human judges. Specifically, we present experiments showing that, for CNN-based text classification, explanations generated using “supervised attention” are judged superior to explanations generated using normal unsupervised attention.
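One standard way to realize supervised attention is to add a loss term that pulls the classifier's attention distribution toward the tokens humans marked as rationales; the formulation below (a KL term against a normalized rationale mask) is an assumed, illustrative variant rather than the paper's exact objective.

import torch
import torch.nn.functional as F

def supervised_attention_loss(attn: torch.Tensor, rationale_mask: torch.Tensor) -> torch.Tensor:
    """attn: (seq_len,) attention weights summing to 1.
    rationale_mask: (seq_len,) 1.0 where a human marked the token as a rationale."""
    # Turn the binary rationale mask into a target distribution over tokens.
    target = rationale_mask / rationale_mask.sum().clamp(min=1.0)
    return F.kl_div(attn.clamp(min=1e-8).log(), target, reduction="sum")

# Training objective: classification loss + lambda * attention supervision.
attn = torch.softmax(torch.randn(10), dim=0)
mask = torch.tensor([0, 0, 1, 1, 0, 0, 0, 1, 0, 0], dtype=torch.float)
loss = supervised_attention_loss(attn, mask)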

Faithful Multimodal Explanation for Visual Question Answering
Jialin Wu | Raymond Mooney
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

AI systems’ ability to explain their reasoning is critical to their utility and trustworthiness. Deep neural networks have enabled significant progress on many challenging problems such as visual question answering (VQA). However, most of them are opaque black boxes with limited explanatory capability. This paper presents a novel approach to developing a high-performing VQA system that can elucidate its answers with integrated textual and visual explanations that faithfully reflect important aspects of its underlying reasoning while capturing the style of comprehensible human explanations. Extensive experimental evaluation demonstrates the advantages of this approach compared to competing methods using both automated metrics and human evaluation.

2018

Learning a Policy for Opportunistic Active Learning
Aishwarya Padmakumar | Peter Stone | Raymond Mooney
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Active learning identifies data points to label that are expected to be the most useful in improving a supervised model. Opportunistic active learning incorporates active learning into interactive tasks that constrain possible queries during interactions. Prior work has shown that opportunistic active learning can be used to improve grounding of natural language descriptions in an interactive object retrieval task. In this work, we use reinforcement learning for such an object retrieval task, to learn a policy that effectively trades off task completion with model improvement that would benefit future tasks.

Stacking with Auxiliary Features for Visual Question Answering
Nazneen Fatema Rajani | Raymond Mooney
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

Visual Question Answering (VQA) is a well-known and challenging task that requires systems to jointly reason about natural language and vision. Deep learning models in various forms have been the standard for solving VQA. However, some of these VQA models are better at certain types of image-question pairs than other models. Ensembling VQA models intelligently to leverage their diverse expertise is, therefore, advantageous. Stacking With Auxiliary Features (SWAF) is an intelligent ensembling technique which learns to combine the results of multiple models using features of the current problem as context. We propose four categories of auxiliary features for ensembling for VQA. Three out of the four categories of features can be inferred from an image-question pair and do not require querying the component models. The fourth category of auxiliary features uses model-specific explanations. In this paper, we describe how we use these various categories of auxiliary features to improve performance for VQA. Using SWAF to effectively ensemble three recent systems, we obtain a new state-of-the-art. Our work also highlights the advantages of explainable AI models.
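The ensembling step can be sketched as a meta-classifier over component-model outputs concatenated with auxiliary features of the image-question pair; the particular features and classifier below are illustrative assumptions, not the configuration used in the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_features(model_confidences: np.ndarray, aux: np.ndarray) -> np.ndarray:
    # model_confidences: (n_examples, n_models) confidence each component model
    # assigns to a candidate answer; aux: (n_examples, n_aux) auxiliary features
    # (e.g., question type, image-question similarity) computed without querying
    # the component models.
    return np.hstack([model_confidences, aux])

# The meta-classifier learns whether a candidate answer should be accepted.
X = stack_features(np.random.rand(200, 3), np.random.rand(200, 4))
y = np.random.randint(0, 2, size=200)
meta = LogisticRegression(max_iter=1000).fit(X, y)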

2017

Leveraging Discourse Information Effectively for Authorship Attribution
Elisa Ferracane | Su Wang | Raymond Mooney
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

We explore techniques to maximize the effectiveness of discourse information in the task of authorship attribution. We present a novel method to embed discourse features in a Convolutional Neural Network text classifier, which achieves a state-of-the-art result by a significant margin. We empirically investigate several featurization methods to understand the conditions under which discourse features contribute non-trivial performance gains, and analyze discourse embeddings.

Improving Black-box Speech Recognition using Semantic Parsing
Rodolfo Corona | Jesse Thomason | Raymond Mooney
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Speech is a natural channel for human-computer interaction in robotics and consumer applications. Natural language understanding pipelines that start with speech can have trouble recovering from speech recognition errors. Black-box automatic speech recognition (ASR) systems, built for general purpose use, are unable to take advantage of in-domain language models that could otherwise ameliorate these errors. In this work, we present a method for re-ranking black-box ASR hypotheses using an in-domain language model and semantic parser trained for a particular task. Our re-ranking method significantly improves both transcription accuracy and semantic understanding over a state-of-the-art ASR’s vanilla output.
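Re-ranking the ASR's n-best list can be sketched as a weighted combination of the black-box recognizer's score, an in-domain language-model score, and a semantic-parser confidence; the weights and scoring callables below are placeholders, not the trained model from the paper.

from typing import Callable, List, Tuple

def rerank(
    nbest: List[Tuple[str, float]],            # (hypothesis, black-box ASR score)
    lm_score: Callable[[str], float],          # in-domain language model
    parser_score: Callable[[str], float],      # semantic parser confidence
    weights: Tuple[float, float, float] = (1.0, 0.5, 0.5),
) -> List[Tuple[str, float]]:
    """Return hypotheses sorted by a weighted combination of the three scores."""
    w_asr, w_lm, w_parse = weights
    rescored = [
        (hyp, w_asr * asr + w_lm * lm_score(hyp) + w_parse * parser_score(hyp))
        for hyp, asr in nbest
    ]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)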

Dialog for Language to Code
Shobhit Chaurasia | Raymond J. Mooney
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Generating computer code from natural language descriptions has been a long-standing problem. Prior work in this domain has restricted itself to generating code in one shot from a single description. To overcome this limitation, we propose a system that can engage users in a dialog to clarify their intent until it has all the information to produce correct code. To evaluate the efficacy of dialog in code generation, we focus on synthesizing conditional statements in the form of IFTTT recipes.
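An IFTTT recipe of the kind targeted for synthesis pairs a trigger with an action; the field names in the toy structure below are illustrative rather than the dataset's exact schema.

from dataclasses import dataclass

@dataclass
class Recipe:
    trigger_channel: str
    trigger_function: str
    action_channel: str
    action_function: str

# "Save my new Instagram photos to Dropbox" -- a dialog system would ask
# follow-up questions until all four fields can be filled in confidently.
recipe = Recipe(
    trigger_channel="Instagram",
    trigger_function="Any_new_photo_by_you",
    action_channel="Dropbox",
    action_function="Add_file_from_URL",
)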

Guiding Interaction Behaviors for Multi-modal Grounded Language Learning
Jesse Thomason | Jivko Sinapov | Raymond Mooney
Proceedings of the First Workshop on Language Grounding for Robotics

Multi-modal grounded language learning connects language predicates to physical properties of objects in the world. Sensing with multiple modalities, such as audio, haptics, and visual colors and shapes, while performing interaction behaviors like lifting, dropping, and looking on objects enables a robot to ground non-visual predicates like “empty” as well as visual predicates like “red”. Previous work has established that grounding in multi-modal space improves performance on object retrieval from human descriptions. In this work, we gather behavior annotations from humans and demonstrate that these improve language grounding performance by allowing a system to focus on relevant behaviors for words like “white” or “half-full” that can be understood by looking or lifting, respectively. We also explore adding modality annotations (whether to focus on audio or haptics when performing a behavior), which improves performance, and sharing information between linguistically related predicates (if “green” is a color, “white” is a color), which improves grounding recall but at the cost of precision.

Integrated Learning of Dialog Strategies and Semantic Parsing
Aishwarya Padmakumar | Jesse Thomason | Raymond J. Mooney
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers

Natural language understanding and dialog management are two integral components of interactive dialog systems. Previous research has used machine learning techniques to individually optimize these components, with different forms of direct and indirect supervision. We present an approach to integrate the learning of both a dialog strategy using reinforcement learning, and a semantic parser for robust natural language understanding, using only natural dialog interaction for supervision. Experimental results on a simulated task of robot instruction demonstrate that joint learning of both components improves dialog performance over learning either of these components alone.

2016

Statistical Script Learning with Recurrent Neural Networks
Karl Pichotta | Raymond Mooney
Proceedings of the Workshop on Uphill Battles in Language Processing: Scaling Early Achievements to Robust Methods

Combining Supervised and Unsupervised Ensembles for Knowledge Base Population
Nazneen Fatema Rajani | Raymond Mooney
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

Improving LSTM-based Video Description with Linguistic Knowledge Mined from Text
Subhashini Venugopalan | Lisa Anne Hendricks | Raymond Mooney | Kate Saenko
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

Using Sentence-Level LSTM Language Models for Script Inference
Karl Pichotta | Raymond J. Mooney
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Representing Meaning with a Combination of Logical and Distributional Models
I. Beltagy | Stephen Roller | Pengxiang Cheng | Katrin Erk | Raymond J. Mooney
Computational Linguistics, Volume 42, Issue 4 - December 2016

2015

Stacked Ensembles of Information Extractors for Knowledge-Base Population
Vidhoon Viswanathan | Nazneen Fatema Rajani | Yinon Bentor | Raymond Mooney
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Language to Code: Learning Semantic Parsers for If-This-Then-That Recipes
Chris Quirk | Raymond Mooney | Michel Galley
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Translating Videos to Natural Language Using Deep Recurrent Neural Networks
Subhashini Venugopalan | Huijuan Xu | Jeff Donahue | Marcus Rohrbach | Raymond Mooney | Kate Saenko
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2014

Semantic Parsing using Distributional Semantics and Probabilistic Logic
Islam Beltagy | Katrin Erk | Raymond Mooney
Proceedings of the ACL 2014 Workshop on Semantic Parsing

Probabilistic Soft Logic for Semantic Textual Similarity
Islam Beltagy | Katrin Erk | Raymond Mooney
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

UTexas: Natural Language Semantics using Distributional Semantics and Probabilistic Logic
Islam Beltagy | Stephen Roller | Gemma Boleda | Katrin Erk | Raymond Mooney
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

Integrating Language and Vision to Generate Natural Language Descriptions of Videos in the Wild
Jesse Thomason | Subhashini Venugopalan | Sergio Guadarrama | Kate Saenko | Raymond Mooney
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

Statistical Script Learning with Multi-Argument Events
Karl Pichotta | Raymond Mooney
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics

2013

Detecting Promotional Content in Wikipedia
Shruti Bhosale | Heath Vinicombe | Raymond Mooney
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

Montague Meets Markov: Deep Semantics with Probabilistic Logical Form
Islam Beltagy | Cuong Chau | Gemma Boleda | Dan Garrette | Katrin Erk | Raymond Mooney
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity

Generating Natural-Language Video Descriptions Using Text-Mined Knowledge
Niveda Krishnamoorthy | Girish Malkarnenkar | Raymond Mooney | Kate Saenko | Sergio Guadarrama
Proceedings of the Workshop on Vision and Natural Language Processing

Adapting Discriminative Reranking to Grounded Language Learning
Joohyun Kim | Raymond Mooney
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2012

Learning to “Read Between the Lines” using Bayesian Logic Programs
Sindhu Raghavan | Raymond Mooney | Hyeonseo Ku
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Unsupervised PCFG Induction for Grounded Language Learning with Highly Ambiguous Supervision
Joohyun Kim | Raymond Mooney
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

Learning Language from Perceptual Context
Raymond Mooney
Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics

2011

Cross-Cutting Models of Lexical Semantics
Joseph Reisinger | Raymond Mooney
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

Implementing Weighted Abduction in Markov Logic
James Blythe | Jerry Hobbs | Pedro Domingos | Rohit Kate | Raymond Mooney
Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011)

Integrating Logical Representations with Probabilistic Information using Markov Logic
Dan Garrette | Katrin Erk | Raymond Mooney
Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011)

2010

A Mixture Model with Sharing for Lexical Semantics
Joseph Reisinger | Raymond Mooney
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Learning to Predict Readability using Diverse Linguistic Features
Rohit Kate | Xiaoqiang Luo | Siddharth Patwardhan | Martin Franz | Radu Florian | Raymond Mooney | Salim Roukos | Chris Welty
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

Generative Alignment and Semantic Parsing for Learning from Ambiguous Supervision
Joohyun Kim | Raymond Mooney
Coling 2010: Posters

Multi-Prototype Vector-Space Models of Word Meaning
Joseph Reisinger | Raymond J. Mooney
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Authorship Attribution Using Probabilistic Context-Free Grammars
Sindhu Raghavan | Adriana Kovashka | Raymond Mooney
Proceedings of the ACL 2010 Conference Short Papers

Joint Entity and Relation Extraction Using Card-Pyramid Parsing
Rohit J. Kate | Raymond Mooney
Proceedings of the Fourteenth Conference on Computational Natural Language Learning

2009

Learning a Compositional Semantic Parser using an Existing Syntactic Parser
Ruifang Ge | Raymond Mooney
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

2007

Learning Synchronous Grammars for Semantic Parsing with Lambda Calculus
Yuk Wah Wong | Raymond Mooney
Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics

Learning to Extract Relations from the Web using Minimal Supervision
Razvan Bunescu | Raymond Mooney
Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics

Generation by Inverting a Semantic Parser that Uses Statistical Machine Translation
Yuk Wah Wong | Raymond Mooney
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference

Semi-Supervised Learning for Semantic Parsing using Support Vector Machines
Rohit Kate | Raymond Mooney
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers

2006

Learning for Semantic Parsing with Statistical Machine Translation
Yuk Wah Wong | Raymond Mooney
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

Integrating Co-occurrence Statistics with Information Extraction for Robust Retrieval of Protein Interactions from Medline
Razvan Bunescu | Raymond Mooney | Arun Ramani | Edward Marcotte
Proceedings of the HLT-NAACL BioNLP Workshop on Linking Natural Language and Biology

Using String-Kernels for Learning Semantic Parsers
Rohit J. Kate | Raymond J. Mooney
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

Discriminative Reranking for Semantic Parsing
Ruifang Ge | Raymond J. Mooney
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions

2005

A Statistical Semantic Parser that Integrates Syntax and Semantics
Ruifang Ge | Raymond Mooney
Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)

Using Biomedical Literature Mining to Consolidate the Set of Known Human Protein-Protein Interactions
Arun Ramani | Razvan Bunescu | Raymond Mooney | Edward Marcotte
Proceedings of the ACL-ISMB Workshop on Linking Biological Literature, Ontologies and Databases: Mining Biological Semantics

Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing
Raymond Mooney | Chris Brew | Lee-Feng Chien | Katrin Kirchhoff
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

A Shortest Path Dependency Kernel for Relation Extraction
Razvan Bunescu | Raymond Mooney
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

2004

Collective Information Extraction with Relational Markov Networks
Razvan Bunescu | Raymond Mooney
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)

2000

Automated Construction of Database Interfaces: Integrating Statistical and Relational Learning for Semantic Parsing
Lappoon R. Tang | Raymond J. Mooney
2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora

1998

Semantic Lexicon Acquisition for Learning Natural Language Interfaces
Cynthia A. Thompson | Raymond J. Mooney
Sixth Workshop on Very Large Corpora

1997

Relational Learning of Pattern-Match Rules for Information Extraction
Mary Elaine Califf | Raymond J. Mooney
CoNLL97: Computational Natural Language Learning

Learning Parse and Translation Decisions from Examples with Rich Context
Ulf Hermjakob | Raymond J. Mooney
35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics

1996

Comparative Experiments on Disambiguating Word Senses: An Illustration of the Role of Bias in Machine Learning
Raymond J. Mooney
Conference on Empirical Methods in Natural Language Processing