Mukund Sridhar


2022

Instilling Type Knowledge in Language Models via Multi-Task QA
Shuyang Li | Mukund Sridhar | Chandana Satya Prakash | Jin Cao | Wael Hamza | Julian McAuley
Findings of the Association for Computational Linguistics: NAACL 2022

Understanding human language often necessitates understanding entities and their place in a taxonomy of knowledge—their types. Previous methods to learn entity types rely on training classifiers on datasets with coarse, noisy, and incomplete labels. We introduce a method to instill fine-grained type knowledge in language models with text-to-text pre-training on type-centric questions leveraging knowledge base documents and knowledge graphs. We create the WikiWiki dataset: entities and passages from 10M Wikipedia articles linked to the Wikidata knowledge graph with 41K types. Models trained on WikiWiki achieve state-of-the-art performance in zero-shot dialog state tracking benchmarks, accurately infer entity types in Wikipedia articles, and can discover new types deemed useful by human judges.
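As a rough illustration of the type-centric QA framing, here is a minimal sketch of how a Wikidata-linked passage might be converted into a text-to-text training example. The prompt template and record fields are hypothetical, not the actual WikiWiki schema:

```python
# Minimal sketch: frame entity typing as a QA pair a seq2seq model can be
# trained on, in the spirit of the paper's type-centric pre-training.
# Field names and the prompt wording are illustrative assumptions.

def make_type_qa_example(passage: str, mention: str, types: list[str]) -> dict:
    """Build one text-to-text example asking for an entity's types."""
    source = f"question: What type of entity is {mention}? context: {passage}"
    target = ", ".join(types)  # fine-grained Wikidata types as the answer
    return {"source": source, "target": target}

example = make_type_qa_example(
    passage=("Marie Curie was a physicist and chemist who conducted "
             "pioneering research on radioactivity."),
    mention="Marie Curie",
    types=["human", "physicist", "chemist"],
)
print(example["source"])
print(example["target"])
```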

2021

Zero-shot Generalization in Dialog State Tracking through Generative Question Answering
Shuyang Li | Jin Cao | Mukund Sridhar | Henghui Zhu | Shang-Wen Li | Wael Hamza | Julian McAuley
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Dialog State Tracking (DST), an integral part of modern dialog systems, aims to track user preferences and constraints (slots) in task-oriented dialogs. In real-world settings with constantly changing services, DST systems must generalize to new domains and unseen slot types. Existing methods for DST do not generalize well to new slot names and many require known ontologies of slot types and values for inference. We introduce a novel ontology-free framework that supports natural language queries for unseen constraints and slots in multi-domain task-oriented dialogs. Our approach is based on generative question-answering using a conditional language model pre-trained on substantive English sentences. Our model improves joint goal accuracy in zero-shot domain adaptation settings by up to 9% (absolute) over the previous state-of-the-art on the MultiWOZ 2.1 dataset.
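To make the ontology-free framing concrete, the sketch below poses a slot as a natural-language question over the dialog history and generates its value with an off-the-shelf seq2seq model. Here t5-small is a stand-in, not the paper's pre-trained conditional language model, and the question wording is illustrative:

```python
# Hedged sketch of DST as generative QA: a slot becomes a natural-language
# question, and its value is generated rather than classified, so unseen
# slots need no predefined ontology of slot types and values.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

dialog = ("user: I need a cheap hotel in the north of town. "
          "system: Okay, for how many nights?")
question = "What is the price range of the hotel the user wants?"

inputs = tokenizer(f"question: {question} context: {dialog}",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=8)
slot_value = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(slot_value)  # ideally "cheap" after DST fine-tuning
```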

Towards Realistic Single-Task Continuous Learning Research for NER
Justin Payan | Yuval Merhav | He Xie | Satyapriya Krishna | Anil Ramakrishna | Mukund Sridhar | Rahul Gupta
Findings of the Association for Computational Linguistics: EMNLP 2021

There is increasing interest in continuous learning (CL), as data privacy is becoming a priority for real-world machine learning applications. Meanwhile, there is still a lack of academic NLP benchmarks applicable to realistic CL settings, which is a major challenge for the advancement of the field. In this paper, we discuss some of the unrealistic data characteristics of public datasets and study the challenges of realistic single-task continuous learning, as well as the effectiveness of data rehearsal as a way to mitigate accuracy loss. We construct a CL NER dataset from an existing publicly available dataset and release it along with the code to the research community.
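A toy sketch of the data-rehearsal idea, assuming a simple reservoir-sampled replay buffer; the capacity and mixing policy are illustrative choices, not the paper's exact setup:

```python
# Toy rehearsal buffer for single-task continual learning: keep a small
# uniform sample of past examples and mix it into each new training phase
# to mitigate forgetting. Sizes and policies here are assumptions.
import random

class RehearsalBuffer:
    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example) -> None:
        """Reservoir sampling keeps a uniform sample over all examples seen."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def training_mix(self, new_examples):
        """Mix replayed old examples into the new phase's training data."""
        mixed = list(new_examples) + list(self.buffer)
        self.rng.shuffle(mixed)
        return mixed
```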

Knowledge Informed Semantic Parsing for Conversational Question Answering
Raghuveer Thirukovalluru | Mukund Sridhar | Dung Thai | Shruti Chanumolu | Nicholas Monath | Sankaranarayanan Ananthakrishnan | Andrew McCallum
Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)

Smart assistants are tasked with answering various questions about world knowledge. These questions range from retrieval of simple facts to complex, multi-hop queries involving various operators (e.g., filter, argmax). Semantic parsing has emerged as the state-of-the-art approach for answering such questions by forming queries that extract information from knowledge bases (KBs). Specifically, neural semantic parsers (NSPs) effectively translate natural questions into logical forms, which execute against the KB to yield the desired answers. Yet NSPs can produce non-executable logical forms when instances referenced in the generated logical forms are missing due to the incompleteness of KBs. Intuitively, knowledge of the KB structure informs the NSP how the global structure of logical forms changes with respect to changes in KB instances. In this work, we propose a novel knowledge-informed decoder variant of the NSP. We consider the conversational question answering setting, where a natural language query, its context, and its final answer are available at training time. Experimental results show that our method outperforms strong baselines by 1.8 F1 points overall across 10 question types of the CSQA dataset. For the “Logical Reasoning” category in particular, our model improves by 7 F1 points. Furthermore, our results are achieved with 90.3% fewer parameters, allowing faster training on large-scale datasets.
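A rough sketch of the intuition behind knowledge-informed decoding, assuming a hypothetical toy KB: when the decoder must emit a relation, candidates can be filtered to relations the KB actually attaches to the current entity, keeping generated logical forms executable. This is one plausible reading of the idea, not the paper's exact decoder:

```python
# Illustrative KB-constrained decoding step. The tiny dict-based KB and the
# candidate list are hypothetical stand-ins for a real knowledge graph and
# a neural decoder's relation vocabulary.
TOY_KB = {
    "Q937": {"educated_at": ["Q206702"], "award_received": ["Q38104"]},
    "Q206702": {"located_in": ["Q39"]},
}

def allowed_relations(entity_id: str, kb=TOY_KB) -> set[str]:
    """Relations that can follow `entity_id` without breaking executability."""
    return set(kb.get(entity_id, {}))

def constrained_step(candidates: list[str], entity_id: str) -> list[str]:
    """Filter the decoder's relation candidates against the KB structure."""
    legal = allowed_relations(entity_id)
    return [r for r in candidates if r in legal]

print(constrained_step(["educated_at", "spouse"], "Q937"))  # ['educated_at']
```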