Deepak Ramachandran

2021

Which Linguist Invented the Lightbulb? Presupposition Verification for Question-Answering
Najoung Kim | Ellie Pavlick | Burcu Karagol Ayan | Deepak Ramachandran
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Many Question-Answering (QA) datasets contain unanswerable questions, but their treatment in QA systems remains primitive. Our analysis of the Natural Questions (Kwiatkowski et al., 2019) dataset reveals that a substantial portion of unanswerable questions (~21%) can be explained by the presence of unverifiable presuppositions. Through a user preference study, we demonstrate that the oracle behavior of our proposed system, which provides responses based on presupposition failure, is preferred over the oracle behavior of existing QA systems. We then present a novel framework for implementing such a system in three steps: presupposition generation, presupposition verification, and explanation generation, and report progress on each. Finally, we show that a simple modification, adding presuppositions and their verifiability to the input of a competitive end-to-end QA system, yields modest gains in QA performance and unanswerability detection, demonstrating the promise of our approach.

2020

Do Language Embeddings capture Scales?
Xikun Zhang | Deepak Ramachandran | Ian Tenney | Yanai Elazar | Dan Roth
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP

Pretrained Language Models (LMs) have been shown to possess significant linguistic, common-sense, and factual knowledge. One form of knowledge that has not yet been studied in this context is information about the scalar magnitudes of objects. We show that pretrained LMs capture a significant amount of this information but fall short of the capability required for general common-sense reasoning. We identify contextual information in pretraining and numeracy as two key factors affecting their performance, and show that a simple method of canonicalizing numbers can have a significant effect on the results.

Do Language Embeddings capture Scales?
Xikun Zhang | Deepak Ramachandran | Ian Tenney | Yanai Elazar | Dan Roth
Findings of the Association for Computational Linguistics: EMNLP 2020

Pretrained Language Models (LMs) have been shown to possess significant linguistic, common-sense, and factual knowledge. One form of knowledge that has not yet been studied in this context is information about the scalar magnitudes of objects. We show that pretrained LMs capture a significant amount of this information but fall short of the capability required for general common-sense reasoning. We identify contextual information in pretraining and numeracy as two key factors affecting their performance, and show that a simple method of canonicalizing numbers can have a significant effect on the results.

2019

How Large Are Lions? Inducing Distributions over Quantitative Attributes
Yanai Elazar | Abhijit Mahabal | Deepak Ramachandran | Tania Bedrax-Weiss | Dan Roth
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Most current NLP systems have little knowledge about the quantitative attributes of objects and events. We propose an unsupervised method for collecting quantitative information from large amounts of web data, and use it to create a new, very large resource consisting of distributions over physical quantities associated with objects, adjectives, and verbs, which we call Distributions over Quantities (DoQ). This contrasts with recent work in this area, which has focused on making only relative comparisons such as “Is a lion bigger than a wolf?”. Our evaluation shows that DoQ compares favorably with state-of-the-art results on existing datasets for relative comparisons of nouns and adjectives, and on a new dataset we introduce.

2015

Belief Tracking with Stacked Relational Trees
Deepak Ramachandran | Adwait Ratnaparkhi
Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue

A TV Program Discovery Dialog System using recommendations
Deepak Ramachandran | Mark Fanty | Ronald Provine | Peter Yeh | William Jarrold | Adwait Ratnaparkhi | Benjamin Douglas
Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue

2013

The Dialog State Tracking Challenge
Jason Williams | Antoine Raux | Deepak Ramachandran | Alan Black
Proceedings of the SIGDIAL 2013 Conference

2012

Landmark-Based Location Belief Tracking in a Spoken Dialog System
Yi Ma | Antoine Raux | Deepak Ramachandran | Rakesh Gupta
Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue

2010

Probabilistic Ontology Trees for Belief Tracking in Dialog Systems
Neville Mehta | Rakesh Gupta | Antoine Raux | Deepak Ramachandran | Stefan Krawczyk
Proceedings of the SIGDIAL 2010 Conference