2024
AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model
Seungwhan Moon, Andrea Madotto, Zhaojiang Lin, Tushar Nagarajan, Matt Smith, Shashank Jain, Chun-Fu Yeh, Prakash Murugesan, Peyman Heidari, Yue Liu, Kavya Srinet, Babak Damavandi, Anuj Kumar
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
We present Any-Modality Augmented Language Model (AnyMAL), a unified model that reasons over diverse input modality signals (i.e., text, image, video, audio, and IMU motion sensor signals) and generates textual responses. AnyMAL inherits the powerful text-based reasoning abilities of state-of-the-art LLMs, including Llama-3 (70B), and converts modality-specific signals to the joint textual space through a pre-trained aligner module. In this paper, we provide details on the optimizations implemented to efficiently scale the training pipeline, and present a comprehensive recipe for model and training configurations. We conduct comprehensive empirical analysis comprising both human and automatic evaluations, and demonstrate state-of-the-art performance on various multimodal tasks compared to industry-leading models, albeit with a relatively small number of trainable parameters.
2021
El Volumen Louder Por Favor: Code-switching in Task-oriented Semantic Parsing
Arash Einolghozati, Abhinav Arora, Lorena Sainz-Maza Lecanda, Anuj Kumar, Sonal Gupta
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Being able to parse code-switched (CS) utterances, such as Spanish+English or Hindi+English, is essential to democratize task-oriented semantic parsing systems for certain locales. In this work, we focus on Spanglish (Spanish+English) and release a dataset, CSTOP, containing 5800 CS utterances alongside their semantic parses. We examine the CS generalizability of various Cross-lingual (XL) models and exhibit the advantage of pre-trained XL language models when data for only one language is present. As such, we focus on improving the pre-trained models for the case when only an English corpus, alongside either zero or a few CS training instances, is available. We propose two data augmentation methods for the zero-shot and the few-shot settings: fine-tune using translate-and-align and augment using a generation model followed by match-and-filter. Combining the few-shot setting with the above improvements decreases the initial 30-point accuracy gap between the zero-shot and the full-data settings by two-thirds.
Building Adaptive Acceptability Classifiers for Neural NLG
Soumya Batra, Shashank Jain, Peyman Heidari, Ankit Arun, Catharine Youngs, Xintong Li, Pinar Donmez, Shawn Mei, Shiunzu Kuo, Vikas Bhardwaj, Anuj Kumar, Michael White
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
We propose a novel framework to train models to classify acceptability of responses generated by natural language generation (NLG) models, improving upon existing sentence transformation and model-based approaches. An NLG response is considered acceptable if it is both semantically correct and grammatical. We do not make use of any human references, making the classifiers suitable for runtime deployment. Training data for the classifiers is obtained using a 2-stage approach: first, synthetic data is generated using a combination of existing and new model-based approaches; then, a novel validation framework filters and sorts the synthetic data into acceptable and unacceptable classes. Our 2-stage approach adapts to a wide range of data representations and does not require additional data beyond what the NLG models are trained on. It is also independent of the underlying NLG model architecture, and is able to generate more realistic samples close to the distribution of the NLG model-generated responses. We present results on 5 datasets (WebNLG, Cleaned E2E, ViGGO, Alarm, and Weather) with varying data representations. We compare our framework with existing techniques that involve synthetic data generation using simple sentence transformations and/or model-based techniques, and show that building acceptability classifiers using data that resembles the generation model outputs, followed by a validation framework, outperforms the existing techniques, achieving state-of-the-art results. We also show that our techniques can be used in few-shot settings using self-training.
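As a loose, hypothetical sketch of the 2-stage idea described in this abstract (the stub generator and the rule-based validator below are inventions for illustration, not the paper's actual models):

```python
import re

def generate_candidates(meaning, n=4):
    """Stage 1 (stub): stand-in for model-based synthetic data generation.
    A real system would sample responses from NLG models; here we
    hand-write variants, including deliberately broken ones."""
    return [
        "It will be sunny in Seattle today.",  # acceptable
        "It will be sunny sunny in Seattle.",  # disfluent repetition
        "It will be rainy in Seattle today.",  # semantically wrong
        "sunny Seattle will it today",         # ungrammatical
    ][:n]

def validate(meaning, response):
    """Stage 2 (toy validator): sort candidates into acceptable and
    unacceptable classes. Checks semantic correctness (every slot value
    appears) and a crude grammaticality proxy (no immediate word
    repetition, sentence-final punctuation)."""
    semantic_ok = all(v.lower() in response.lower() for v in meaning.values())
    no_repeat = not re.search(r"\b(\w+) \1\b", response.lower())
    grammatical = response.endswith(".") and no_repeat
    return semantic_ok and grammatical

# Structured input the responses should realize.
meaning = {"condition": "sunny", "city": "Seattle"}
labeled = [(r, validate(meaning, r)) for r in generate_candidates(meaning)]
print(labeled)  # only the first candidate is labeled acceptable
```

The labeled pairs would then serve as training data for the acceptability classifier; the paper's validation framework is model-based rather than rule-based.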
Getting to Production with Few-shot Natural Language Generation Models
Peyman Heidari, Arash Einolghozati, Shashank Jain, Soumya Batra, Lee Callender, Ankit Arun, Shawn Mei, Sonal Gupta, Pinar Donmez, Vikas Bhardwaj, Anuj Kumar, Michael White
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue
In this paper, we study the utilization of pre-trained language models to enable few-shot Natural Language Generation (NLG) in task-oriented dialog systems. We introduce a system consisting of iterative self-training and an extensible mini-template framework that textualizes the structured input data into semi-natural text to fully take advantage of pre-trained language models. We compare various representations of NLG models’ input and output and show that transforming the input and output to be similar to what the language model has seen before during pre-training improves the model’s few-shot performance substantially. We show that neural models can be trained with as few as 300 annotated examples while providing high fidelity, considerably lowering the resource requirements for standing up a new domain or language. This level of data efficiency removes the need for crowd-sourced data collection, resulting in higher quality data annotated by expert linguists. In addition, model maintenance and debugging processes will improve in this few-shot setting. Finally, we explore distillation and using a caching system to satisfy latency requirements of real-world systems.
2020
Best Practices for Data-Efficient Modeling in NLG: How to Train Production-Ready Neural Models with Less Data
Ankit Arun, Soumya Batra, Vikas Bhardwaj, Ashwini Challa, Pinar Donmez, Peyman Heidari, Hakan Inan, Shashank Jain, Anuj Kumar, Shawn Mei, Karthik Mohan, Michael White
Proceedings of the 28th International Conference on Computational Linguistics: Industry Track
Natural language generation (NLG) is a critical component in conversational systems, owing to its role of formulating a correct and natural text response. Traditionally, NLG components have been deployed using template-based solutions. Although neural network solutions recently developed in the research community have been shown to provide several benefits, deployment of such model-based solutions has been challenging due to high latency, correctness issues, and high data needs. In this paper, we present approaches that have helped us deploy data-efficient neural solutions for NLG in conversational systems to production. We describe a family of sampling and modeling techniques to attain production quality with light-weight neural network models using only a fraction of the data that would be necessary otherwise, and provide a thorough comparison among them. Our results show that domain complexity dictates the appropriate approach to achieve high data efficiency. Finally, we distill the lessons from our experimental findings into a list of best practices for production-level NLG model development, and present them in a brief runbook. Importantly, the end products of all of the techniques are small sequence-to-sequence models (~2MB) that we can reliably deploy in production. These models achieve the same quality as large pretrained models (~1GB) as judged by human raters.
Conversational Semantic Parsing
Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Michael Haeger, Haoran Li, Yashar Mehdad, Veselin Stoyanov, Anuj Kumar, Mike Lewis, Sonal Gupta
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
The structured representation for semantic parsing in task-oriented assistant systems is geared towards simple understanding of one-turn queries. Due to the limitations of the representation, session-based properties such as co-reference resolution and context carryover are processed downstream in a pipelined system. In this paper, we propose a semantic representation for such task-oriented conversational systems that can represent concepts such as co-reference and context carryover, enabling comprehensive understanding of queries in a session. We release a new session-based, compositional task-oriented parsing dataset of 20k sessions consisting of 60k utterances. Unlike the Dialog State Tracking Challenges, the queries in the dataset have compositional forms. We propose a new family of Seq2Seq models for this session-based parsing, which also sets state-of-the-art results on ATIS, SNIPS, TOP and DSTC2. Notably, we improve the best known results on DSTC2 by up to 5 points for slot-carryover.
Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI
Tsung-Hsien Wen, Asli Celikyilmaz, Zhou Yu, Alexandros Papangelis, Mihail Eric, Anuj Kumar, Iñigo Casanueva, Rushin Shah
2019
Memory Grounded Conversational Reasoning
Seungwhan Moon, Pararth Shah, Rajen Subba, Anuj Kumar
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations
We demonstrate a conversational system which engages the user through a multi-modal, multi-turn dialog over the user’s memories. The system can perform QA over memories by responding to user queries to recall specific attributes and associated media (e.g., photos) of past episodic memories. The system can also make proactive suggestions to surface related events or facts from past memories to make conversations more engaging and natural. To implement such a system, we collect a new corpus of memory grounded conversations, which comprises human-to-human role-playing dialogs given synthetic memory graphs with simulated attributes. Our proof-of-concept system operates on these synthetic memory graphs; however, it can be trained on and applied to real-world user memory data (e.g., photo albums). We present the architecture of the proposed conversational system, and example queries that the system supports.
Memory Graph Networks for Explainable Memory-grounded Question Answering
Seungwhan Moon, Pararth Shah, Anuj Kumar, Rajen Subba
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
We introduce Episodic Memory QA, the task of answering personal user questions grounded on a memory graph (MG), where episodic memories and related entity nodes are connected via relational edges. We create a new benchmark dataset first by generating synthetic memory graphs with simulated attributes, and by composing 100K QA pairs for the generated MG with bootstrapped scripts. To address the unique challenges of the proposed task, we propose Memory Graph Networks (MGN), a novel extension of memory networks that enables dynamic expansion of memory slots through graph traversals, making it able to answer queries in which contexts from multiple linked episodes and external knowledge are required. We then propose the Episodic Memory QA Net with multiple module networks to effectively handle various question types. Empirical results show improvement over the QA baselines in top-k answer prediction accuracy on the proposed task. The proposed model also generates a graph walk path and attention vectors for each predicted answer, providing a natural way to explain its QA reasoning.
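The dynamic memory-slot expansion via graph traversal can be loosely sketched on a toy synthetic memory graph. The graph, node names, and breadth-first expansion below are illustrative assumptions, not the paper's MGN architecture, which learns which edges to expand:

```python
from collections import deque

# Toy synthetic memory graph: node -> list of (relation, neighbor) edges.
# Entities and memories are hypothetical stand-ins for the simulated
# attributes described in the abstract.
memory_graph = {
    "memory:beach_trip": [("with", "person:Alice"), ("at", "place:Malibu")],
    "person:Alice": [("attended", "memory:beach_trip"),
                     ("attended", "memory:birthday")],
    "memory:birthday": [("with", "person:Alice"), ("at", "place:home")],
    "place:Malibu": [],
    "place:home": [],
}

def expand_slots(graph, seeds, hops):
    """Breadth-first traversal that grows the set of memory slots
    reachable within `hops` edges of the seed nodes, recording the walk
    path to each node so a predicted answer can be explained."""
    paths = {s: [s] for s in seeds}
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for rel, nbr in graph.get(node, []):
            if nbr not in paths:
                paths[nbr] = paths[node] + [rel, nbr]
                frontier.append((nbr, depth + 1))
    return paths

# "Who was at the beach trip, and what else did they attend?"
# requires contexts from two linked episodes, reachable in two hops.
paths = expand_slots(memory_graph, ["memory:beach_trip"], hops=2)
print(paths["memory:birthday"])
```

The returned walk path plays the role of the explanation trace: it shows which linked episodes were traversed to reach the answer node.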
OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs
Seungwhan Moon, Pararth Shah, Anuj Kumar, Rajen Subba
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
We study a conversational reasoning model that strategically traverses through a large-scale common fact knowledge graph (KG) to introduce engaging and contextually diverse entities and attributes. For this study, we collect a new Open-ended Dialog ↔ KG parallel corpus called OpenDialKG, where each utterance from 15K human-to-human role-playing dialogs is manually annotated with ground-truth reference to corresponding entities and paths from a large-scale KG with 1M+ facts. We then propose the DialKG Walker model that learns the symbolic transitions of dialog contexts as structured traversals over KG, and predicts natural entities to introduce given previous dialog contexts via a novel domain-agnostic, attention-based graph path decoder. Automatic and human evaluations show that our model can retrieve more natural and human-like responses than the state-of-the-art baselines or rule-based models, in both in-domain and cross-domain tasks. The proposed model also generates a KG walk path for each entity retrieved, providing a natural way to explain conversational reasoning.
Proceedings of the First Workshop on NLP for Conversational AI
Yun-Nung Chen, Tania Bedrax-Weiss, Dilek Hakkani-Tur, Anuj Kumar, Mike Lewis, Thang-Minh Luong, Pei-Hao Su, Tsung-Hsien Wen
A Tree-to-Sequence Model for Neural NLG in Task-Oriented Dialog
Jinfeng Rao, Kartikeya Upasani, Anusha Balakrishnan, Michael White, Anuj Kumar, Rajen Subba
Proceedings of the 12th International Conference on Natural Language Generation
Generating fluent natural language responses from structured semantic representations is a critical step in task-oriented conversational systems. Sequence-to-sequence models on flat meaning representations (MR) have been dominant in this task, for example in the E2E NLG Challenge. Previous work has shown that a tree-structured MR can improve the model for better discourse-level structuring and sentence-level planning. In this work, we propose a tree-to-sequence model that uses a tree-LSTM encoder to leverage the tree structures in the input MR, and further enhance the decoding by a structure-enhanced attention mechanism. In addition, we explore combining these enhancements with constrained decoding to improve semantic correctness. Our experiments not only show significant improvements over standard seq2seq baselines, but also demonstrate that our model is more data-efficient and generalizes better to hard scenarios.
2018
Semantic Parsing for Task Oriented Dialog using Hierarchical Representations
Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, Mike Lewis
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Task oriented dialog systems typically first parse user utterances to semantic frames comprised of intents and slots. Previous work on task-oriented intent detection and slot filling has been restricted to one intent per query and one slot label per token, and thus cannot model complex compositional requests. Alternative semantic parsing systems have represented queries as logical forms, but these are challenging to annotate and parse. We propose a hierarchical annotation scheme for semantic parsing that allows the representation of compositional queries, and can be efficiently and accurately parsed by standard constituency parsing models. We release a dataset of 44k annotated queries (http://fb.me/semanticparsingdialog), and show that parsing models outperform sequence-to-sequence approaches on this dataset.
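To make the hierarchical scheme concrete: it nests intents (IN:) and slots (SL:) inside one another as a bracketed tree. The example query and label names below are illustrative, not drawn from the released dataset; a minimal reader for this kind of bracketed format might look like:

```python
def parse_top(s):
    """Parse a bracketed hierarchical annotation, e.g.
    [IN:... token [SL:... token ] ], into a nested (label, children)
    tree, where children are sub-trees or raw token strings.
    Illustrative sketch, not the authors' released tooling."""
    tokens = s.replace("[", " [ ").replace("]", " ] ").split()
    pos = 0

    def parse_node():
        nonlocal pos
        assert tokens[pos] == "["
        pos += 1
        label = tokens[pos]  # e.g. "IN:GET_DIRECTIONS" or "SL:DESTINATION"
        pos += 1
        children = []
        while tokens[pos] != "]":
            if tokens[pos] == "[":
                children.append(parse_node())
            else:
                children.append(tokens[pos])
                pos += 1
        pos += 1  # consume the closing "]"
        return (label, children)

    return parse_node()

# A compositional query: an intent nested inside a slot, which flat
# one-intent/one-slot-per-token schemes cannot represent.
tree = parse_top(
    "[IN:GET_DIRECTIONS Directions to "
    "[SL:DESTINATION [IN:GET_EVENT the [SL:CATEGORY_EVENT concert ] ] ] ]"
)
print(tree[0])  # IN:GET_DIRECTIONS
```

Because the representation is a labeled bracketing over the utterance's tokens, standard constituency parsing models apply directly, which is the efficiency argument the abstract makes.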