Ganesh Ramakrishnan


2021

Joint Learning of Hyperbolic Label Embeddings for Hierarchical Multi-label Classification
Soumya Chatterjee | Ayush Maheshwari | Ganesh Ramakrishnan | Saketha Nath Jagarlapudi
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

We consider the problem of multi-label classification where the labels lie on a hierarchy. However, unlike most existing work in hierarchical multi-label classification, we do not assume that the label hierarchy is known. Encouraged by the recent success of hyperbolic embeddings in capturing hierarchical relations, we propose to jointly learn the classifier parameters as well as the label embeddings. Such joint learning is expected to provide a twofold advantage: i) the classifier generalises better as it leverages the prior knowledge that a hierarchy exists over the labels, and ii) in addition to label co-occurrence information, the label embeddings may benefit from the manifold structure of the input datapoints, leading to embeddings that are more faithful to the label hierarchy. We propose a novel formulation for the joint learning and empirically evaluate its efficacy. The results show that joint learning improves over a baseline that employs pre-trained hyperbolic embeddings based on label co-occurrence. Moreover, the proposed classifiers achieve state-of-the-art generalization on standard benchmarks. We also present an evaluation of the hyperbolic embeddings obtained by joint learning and show that they represent the hierarchy more accurately than the alternatives.
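
To make the idea of joint learning concrete, the following is a minimal sketch (assuming a PyTorch setting) of a single objective that combines a multi-label classification loss with a Poincaré-ball distance term pulling co-occurring labels together. The class and parameter names (PoincareJointModel, alpha) are illustrative, and the sketch omits details of the paper's actual formulation, such as constraining the embeddings to the ball via Riemannian optimisation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def poincare_distance(u, v, eps=1e-5):
    # Geodesic distance between points inside the unit Poincare ball.
    sq = torch.sum((u - v) ** 2, dim=-1)
    nu = torch.clamp(1 - torch.sum(u ** 2, dim=-1), min=eps)
    nv = torch.clamp(1 - torch.sum(v ** 2, dim=-1), min=eps)
    return torch.acosh(1 + 2 * sq / (nu * nv) + eps)  # eps keeps gradients finite

class PoincareJointModel(nn.Module):
    def __init__(self, input_dim, num_labels, emb_dim=10):
        super().__init__()
        self.classifier = nn.Linear(input_dim, num_labels)
        # Label embeddings initialised near the origin of the ball.
        self.label_emb = nn.Parameter(torch.randn(num_labels, emb_dim) * 1e-3)

    def joint_loss(self, x, y, alpha=0.1):
        # y is a float multi-hot label matrix of shape [batch, num_labels].
        cls_loss = F.binary_cross_entropy_with_logits(self.classifier(x), y)
        # Pull embeddings of co-occurring labels together in hyperbolic space.
        cooc = (y.t() @ y).clamp(max=1)
        cooc.fill_diagonal_(0)
        d = poincare_distance(self.label_emb.unsqueeze(1), self.label_emb.unsqueeze(0))
        emb_loss = (cooc * d).sum() / cooc.sum().clamp(min=1)
        return cls_loss + alpha * emb_loss

Minimising joint_loss with any stochastic optimiser updates the classifier and the label embeddings together, which is what lets the input manifold influence the learned hierarchy.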

Meta-Learning for Effective Multi-task and Multilingual Modelling
Ishan Tarunesh | Sushil Khyalia | Vishwajeet Kumar | Ganesh Ramakrishnan | Preethi Jyothi
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Natural language processing (NLP) tasks (e.g., question-answering in English) benefit from knowledge of other tasks (e.g., named entity recognition in English) and knowledge of other languages (e.g., question-answering in Spanish). Such shared representations are typically learned in isolation, either across tasks or across languages. In this work, we propose a meta-learning approach to learn the interactions between both tasks and languages. We also investigate the role of different sampling strategies used during meta-learning. We present experiments on five tasks and six languages from the XTREME multilingual benchmark dataset. Our meta-learned model clearly outperforms competitive baselines, including multi-task baselines. We also present zero-shot evaluations on unseen target languages to demonstrate the utility of the proposed model.
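
As an illustration of the sampling aspect, here is a small sketch of drawing (task, language) episodes with a temperature-controlled strategy: temperature 0 samples uniformly over pairs, temperature 1 samples proportionally to dataset size. The function names and the specific strategy are illustrative assumptions, not the paper's exact schedule.

import random

def make_episode_sampler(dataset_sizes, temperature=1.0):
    # dataset_sizes: {(task, language): number of training examples}.
    pairs = list(dataset_sizes)
    weights = [dataset_sizes[p] ** temperature for p in pairs]

    def sample_episode(k_support=16, k_query=16):
        task, lang = random.choices(pairs, weights=weights, k=1)[0]
        # In a full setup, draw k_support + k_query examples from that corpus
        # for the inner and outer meta-learning updates.
        return {"task": task, "language": lang,
                "support_size": k_support, "query_size": k_query}

    return sample_episode

sizes = {("qa", "en"): 80000, ("ner", "hi"): 5000, ("pos", "es"): 12000}
sample_episode = make_episode_sampler(sizes, temperature=0.5)
print(sample_episode())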

Semi-Supervised Data Programming with Subset Selection
Ayush Maheshwari | Oishik Chatterjee | Krishnateja Killamsetty | Ganesh Ramakrishnan | Rishabh Iyer
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Rule Augmented Unsupervised Constituency Parsing
Atul Sahay | Anshul Nasery | Ayush Maheshwari | Ganesh Ramakrishnan | Rishabh Iyer
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Automatic Speech Recognition in Sanskrit: A New Speech Corpus and Modelling Insights
Devaraja Adiga | Rishabh Kumar | Amrith Krishna | Preethi Jyothi | Ganesh Ramakrishnan | Pawan Goyal
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Vocabulary Matters: A Simple yet Effective Approach to Paragraph-level Question Generation
Vishwajeet Kumar | Manish Joshi | Ganesh Ramakrishnan | Yuan-Fang Li
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing

Question generation (QG) has recently attracted considerable attention. Most current neural models take only one or two sentences as input, and perform poorly when multiple sentences or complete paragraphs are given. However, in real-world scenarios it is very important to be able to generate high-quality questions from complete paragraphs. In this paper, we present a simple yet effective technique for answer-aware question generation from paragraphs. We augment a basic sequence-to-sequence QG model with a dynamic, paragraph-specific dictionary and a copy attention mechanism that is persistent across the corpus, without requiring features generated by sophisticated NLP pipelines or handcrafted rules. Our evaluation on SQuAD shows that our model significantly outperforms current state-of-the-art systems for question generation from paragraphs in both automatic and human evaluation. We achieve a 6-point improvement over the best system on BLEU-4, from 16.38 to 22.62.
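
The copy-attention idea can be sketched as a standard pointer-generator style mixture: a gate interpolates between generating from the fixed vocabulary and copying tokens from the paragraph, with paragraph-specific words occupying an extended vocabulary. This is a generic sketch under that assumption, not the paper's exact architecture; all module and argument names are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CopyAttentionHead(nn.Module):
    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.gen_proj = nn.Linear(hidden_size, vocab_size)  # generate from fixed vocab
        self.gate = nn.Linear(hidden_size, 1)                # p(generate) vs p(copy)

    def forward(self, dec_state, enc_states, src_ids, extended_vocab_size):
        # dec_state: [B, H]; enc_states: [B, T, H]; src_ids: [B, T] ids of the
        # paragraph tokens in the extended (fixed + paragraph-specific) vocabulary.
        attn = torch.softmax(
            torch.bmm(enc_states, dec_state.unsqueeze(-1)).squeeze(-1), dim=-1)
        p_gen = torch.sigmoid(self.gate(dec_state))                # [B, 1]
        vocab_dist = F.softmax(self.gen_proj(dec_state), dim=-1)   # [B, V]
        out = dec_state.new_zeros(dec_state.size(0), extended_vocab_size)
        out[:, :vocab_dist.size(1)] = p_gen * vocab_dist
        # Add copy probability mass onto the positions of the source token ids.
        out.scatter_add_(1, src_ids, (1 - p_gen) * attn)
        return out  # distribution over the extended vocabulary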

2019

Cross-Lingual Training for Automatic Question Generation
Vishwajeet Kumar | Nitish Joshi | Arijit Mukherjee | Ganesh Ramakrishnan | Preethi Jyothi
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Automatic question generation (QG) is a challenging problem in natural language understanding. QG systems are typically built assuming access to a large number of training instances, where each instance is a question and its corresponding answer. For a new language, such training instances are hard to obtain, making the QG problem even more challenging. Using this as our motivation, we study the reuse of an available large QG dataset in a secondary language (e.g., English) to learn a QG model for a primary language (e.g., Hindi) of interest. For the primary language, we assume access to a large amount of monolingual text but only a small QG dataset. We propose a cross-lingual QG model which uses the following training regime: (i) unsupervised pretraining of language models in both the primary and secondary languages, and (ii) joint supervised training for QG in both languages. We demonstrate the efficacy of our proposed approach using two different primary languages, Hindi and Chinese. Our proposed framework clearly outperforms a number of baseline models, including a fully-supervised transformer-based model trained on the QG datasets in the primary language. We also create and release a new question answering dataset for Hindi consisting of 6555 sentences.
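
A high-level sketch of the two-phase regime described above, assuming a shared encoder-decoder model object that exposes its own language-modelling and QG losses (lm_loss and qg_loss are placeholder names, not a real API):

import random

def pretrain_language_models(model, monolingual_batches, optimizer, steps):
    # Phase (i): unsupervised pretraining on unlabeled text in each language.
    for _ in range(steps):
        loss = model.lm_loss(next(monolingual_batches))  # placeholder objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def joint_qg_training(model, primary_batches, secondary_batches,
                      optimizer, steps, p_primary=0.5):
    # Phase (ii): supervised QG training that alternates between batches from
    # the low-resource primary language (e.g. Hindi) and the high-resource
    # secondary language (e.g. English), sharing all model parameters.
    for _ in range(steps):
        batches = primary_batches if random.random() < p_primary else secondary_batches
        loss = model.qg_loss(next(batches))              # placeholder objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()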

Putting the Horse before the Cart: A Generator-Evaluator Framework for Question Generation from Text
Vishwajeet Kumar | Ganesh Ramakrishnan | Yuan-Fang Li
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)

Automatic question generation (QG) is a useful yet challenging task in NLP. Recent neural network-based approaches represent the state-of-the-art in this task. In this work, we attempt to strengthen them significantly by adopting a holistic and novel generator-evaluator framework that directly optimizes objectives that reward semantics and structure. The generator is a sequence-to-sequence model that incorporates the structure and semantics of the question being generated. The generator predicts an answer in the passage that the question can pivot on. Employing the copy and coverage mechanisms, it also acknowledges other contextually important (and possibly rare) keywords in the passage that the question needs to conform to, while not redundantly repeating words. The evaluator model evaluates and assigns a reward to each predicted question based on its conformity to the structure of ground-truth questions. We propose two novel QG-specific reward functions for text conformity and answer conformity of the generated question. The evaluator also employs structure-sensitive rewards based on evaluation measures such as BLEU, GLEU, and ROUGE-L, which are suitable for QG. In contrast, most of the previous works only optimize the cross-entropy loss, which can induce inconsistencies between training (objective) and testing (evaluation) measures. Our evaluation shows that our approach significantly outperforms state-of-the-art systems on the widely-used SQuAD benchmark as per both automatic and human evaluation.
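
For the structure-sensitive rewards, below is a sketch of the kind of sentence-level measures the evaluator can score a generated question with (BLEU via NLTK and an LCS-based ROUGE-L). The paper's two QG-specific conformity rewards are not reproduced here.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_reward(reference_tokens, generated_tokens):
    smooth = SmoothingFunction().method1
    return sentence_bleu([reference_tokens], generated_tokens,
                         smoothing_function=smooth)

def rouge_l_reward(reference_tokens, generated_tokens):
    # ROUGE-L F1 via longest common subsequence.
    m, n = len(reference_tokens), len(generated_tokens)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1
                                if reference_tokens[i] == generated_tokens[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / n, lcs / m
    return 2 * precision * recall / (precision + recall)

ref = "what year did the war end".split()
hyp = "which year did the war end".split()
print(bleu_reward(ref, hyp), rouge_l_reward(ref, hyp))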

ParaQG: A System for Generating Questions and Answers from Paragraphs
Vishwajeet Kumar | Sivaanandh Muneeswaran | Ganesh Ramakrishnan | Yuan-Fang Li
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations

Generating syntactically and semantically valid and relevant questions from paragraphs is useful in many applications. Manual generation is a labour-intensive task, as it requires the reading, parsing and understanding of long passages of text. A number of question generation models based on sequence-to-sequence techniques have recently been proposed. Most of them generate questions from sentences only, and none of them is publicly available as an easy-to-use service. In this paper, we demonstrate ParaQG, a Web-based system for generating questions from sentences and paragraphs. ParaQG incorporates a number of novel functionalities to make the question generation process user-friendly. It provides an interactive interface for a user to select answers, with visual insights into how the questions are generated. It also employs various faceted views to group similar questions, as well as filtering techniques to eliminate unanswerable questions.

2018

Entity Resolution and Location Disambiguation in the Ancient Hindu Temples Domain using Web Data
Ayush Maheshwari | Vishwajeet Kumar | Ganesh Ramakrishnan | J. Saketha Nath
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations

We present a system for resolving entities and disambiguating locations in the domain of ancient Hindu temples, based on publicly available web data. Scarce, unstructured information poses a challenge to Entity Resolution (ER) and snippet ranking. Additionally, the same set of entities may be associated with multiple locations, making Location Disambiguation (LD) a further challenge. Mentions and descriptions of temples number in the hundreds of thousands, generated by many users in forms such as text (Wikipedia pages), videos (YouTube videos), and blogs. We demonstrate an integrated approach that combines grammar rules for parsing with unsupervised (clustering) algorithms to resolve entities and locations with high confidence. A demo of our system is accessible at tinyurl.com/templedemos. Our system is open source and available on GitHub.
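
As a sketch of the clustering side of entity resolution (the grammar-rule parsing stage is not shown), near-duplicate temple mentions can be grouped by character n-gram similarity. The threshold and the greedy single-link scheme below are illustrative choices, not the system's actual algorithm.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cluster_mentions(mentions, threshold=0.6):
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))
    sim = cosine_similarity(vec.fit_transform(mentions))
    cluster_of = {}
    clusters = []
    for i, mention in enumerate(mentions):
        # Greedy single-link assignment: join the first cluster containing
        # a sufficiently similar mention, otherwise start a new cluster.
        for j in range(i):
            if sim[i, j] >= threshold:
                cluster_of[i] = cluster_of[j]
                break
        else:
            cluster_of[i] = len(clusters)
            clusters.append([])
        clusters[cluster_of[i]].append(mention)
    return clusters

print(cluster_mentions(["Brihadeeswarar Temple, Thanjavur",
                        "Brihadisvara Temple Thanjavur",
                        "Meenakshi Amman Temple, Madurai"]))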

2015

An Approach to Collective Entity Linking
Ashish Kulkarni | Kanika Agarwal | Pararth Shah | Sunny Raj Rathod | Ganesh Ramakrishnan
Proceedings of the 12th International Conference on Natural Language Processing

Summarization of Multi-Document Topic Hierarchies using Submodular Mixtures
Ramakrishna Bairi | Rishabh Iyer | Ganesh Ramakrishnan | Jeff Bilmes
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

A machine-assisted human translation system for technical documents
Vishwajeet Kumar | Ashish Kulkarni | Pankaj Singh | Ganesh Ramakrishnan | Ganesh Arnaal
Proceedings of Machine Translation Summit XV: User Track

Optimizing Multivariate Performance Measures for Learning Relation Extraction Models
Gholamreza Haffari | Ajay Nagesh | Ganesh Ramakrishnan
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2014

Efficient Reuse of Structured and Unstructured Resources for Ontology Population
Chetana Gavankar | Ashish Kulkarni | Ganesh Ramakrishnan
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

We study the problem of ontology population for a domain ontology and present solutions based on semi-automatic techniques. A domain ontology for an organization often consists of classes whose instances are either specific to, or independent of, the organization. For example, in an academic domain ontology, classes like Professor and Department could be organization (university) specific, while Conference and Programming Languages are organization independent. This distinction allows us to leverage data sources both within the organization and on the Internet to extract entities and populate an ontology. We propose techniques that build on those for open-domain information extraction. Together with user input, we show through comprehensive evaluation how these semi-automatic techniques achieve high precision. We experimented with the academic domain and built an ontology comprising over 220 classes. Intranet documents from five universities formed our organization-specific corpora, and we used open-domain knowledge bases like Wikipedia, Linked Open Data, and web pages from the Internet as the organization-independent data sources. The populated ontology that we built for one of the universities comprised over 75,000 instances. We adhere to semantic web standards and tools and make the resources available in the OWL format. These could be useful for applications such as information extraction, text annotation, and information retrieval.
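
Since the populated resources are released in OWL, here is a minimal sketch, using rdflib, of how extracted instances can be added under classes of an academic ontology. The namespace URL, class names, and instance names are illustrative assumptions.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

ACAD = Namespace("http://example.org/academic-ontology#")  # assumed namespace

g = Graph()
g.bind("acad", ACAD)
g.add((ACAD.Professor, RDF.type, OWL.Class))              # organization-specific class
g.add((ACAD.ProgrammingLanguage, RDF.type, OWL.Class))    # organization-independent class

# Instances extracted from intranet pages or open-domain sources (illustrative).
g.add((ACAD.jane_doe, RDF.type, ACAD.Professor))
g.add((ACAD.jane_doe, RDFS.label, Literal("Jane Doe")))
g.add((ACAD.python, RDF.type, ACAD.ProgrammingLanguage))
g.add((ACAD.python, RDFS.label, Literal("Python")))

g.serialize("populated.owl", format="xml")  # RDF/XML output, readable by OWL tools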

Noisy Or-based model for Relation Extraction using Distant Supervision
Ajay Nagesh | Gholamreza Haffari | Ganesh Ramakrishnan
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2013

SATTY: Word Sense Induction Application in Web Search Clustering
Satyabrata Behera | Upasana Gaikwad | Ramakrishna Bairi | Ganesh Ramakrishnan
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)

Learning to Generate Diversified Query Interpretations using Biconvex Optimization
Ramakrishna Bairi | Ambha A | Ganesh Ramakrishnan
Proceedings of the Sixth International Joint Conference on Natural Language Processing

Structure Cognizant Pseudo Relevance Feedback
Arjun Atreya V | Yogesh Kakde | Pushpak Bhattacharyya | Ganesh Ramakrishnan
Proceedings of the Sixth International Joint Conference on Natural Language Processing

2012

Towards Efficient Named-Entity Rule Induction for Customizability
Ajay Nagesh | Ganesh Ramakrishnan | Laura Chiticariu | Rajasekar Krishnamurthy | Ankush Dharkar | Pushpak Bhattacharyya
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

Building Multilingual Search Index using open source framework
Arjun Atreya | Swapnil Chaudhari | Pushpak Bhattacharyya | Ganesh Ramakrishnan
Proceedings of the 3rd Workshop on South and Southeast Asian Natural Language Processing

Error tracking in search engine development
Swapnil Chaudhari | Arjun Atreya V | Pushpak Bhattacharyya | Ganesh Ramakrishnan
Proceedings of the 3rd Workshop on South and Southeast Asian Natural Language Processing

Proceedings of the Workshop on Information Extraction and Entity Analytics on Social Media Data
Sriram Raghavan | Ganesh Ramakrishnan
Proceedings of the Workshop on Information Extraction and Entity Analytics on Social Media Data

Effective Mentor Suggestion System for Online Collaboration Platform
Advait Raut | Upasana Gaikwad | Ramakrishna Bairi | Ganesh Ramakrishnan
Proceedings of the Workshop on Speech and Language Processing Tools in Education

Enriching An Academic knowledge base using Linked Open Data
Chetana Gavankar | Ashish Kulkarni | Yuan Fang Li | Ganesh Ramakrishnan
Proceedings of the Workshop on Speech and Language Processing Tools in Education

Content Bookmarking and Recommendation
Ananth Vyasarayamut | Satyabrata Behera | Ganesh Ramakrishnan
Proceedings of the Workshop on Speech and Language Processing Tools in Education

Proceedings of the Workshop on Question Answering for Complex Domains
Nanda Kambhatla | Sachindra Joshi | Ganesh Ramakrishnan | Kiran Kate | Priyanka Agrawal
Proceedings of the Workshop on Question Answering for Complex Domains

2008

Learning Decision Lists with Known Rules for Text Mining
Venkatesan Chakravarthy | Sachindra Joshi | Ganesh Ramakrishnan | Shantanu Godbole | Sreeram Balakrishnan
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-II

2007

USP-IBM-1 and USP-IBM-2: The ILP-based Systems for Lexical Sample WSD in SemEval-2007
Lucia Specia | Maria das Graças Volpe Nunes | Ashwin Srinivasan | Ganesh Ramakrishnan
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

2006

Entity Annotation based on Inverse Index Operations
Ganesh Ramakrishnan | Sreeram Balakrishnan | Sachindra Joshi
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing

2004

A gloss-centered algorithm for disambiguation
Ganesh Ramakrishnan | B. Prithviraj | Pushpak Bhattacharyya
Proceedings of SENSEVAL-3, the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text

Generic Text Summarization Using WordNet
Kedar Bellare | Anish Das Sarma | Atish Das Sarma | Navneet Loiwal | Vaibhav Mehta | Ganesh Ramakrishnan | Pushpak Bhattacharyya
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

2003

Question Answering via Bayesian Inference on Lexical Relations
Ganesh Ramakrishnan | Apurva Jadhav | Ashutosh Joshi | Soumen Chakrabarti | Pushpak Bhattacharyya
Proceedings of the ACL 2003 Workshop on Multilingual Summarization and Question Answering