2023
Toward More Accurate and Generalizable Evaluation Metrics for Task-Oriented Dialogs
Abishek Komma | Nagesh Panyam Chandrasekarasastry | Timothy Leffel | Anuj Goyal | Angeliki Metallinou | Spyros Matsoukas | Aram Galstyan
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track)
Measurement of interaction quality is a critical task for the improvement of large-scale spoken dialog systems. Existing approaches to dialog quality estimation either focus on evaluating the quality of individual turns, or collect dialog-level quality measurements from end users immediately following an interaction. In contrast to these approaches, we introduce a new dialog-level annotation workflow called Dialog Quality Annotation (DQA). In DQA, expert annotators evaluate the quality of dialogs as a whole, and also label dialogs for attributes such as goal completion and user sentiment. In this contribution, we show that: (i) while dialog quality cannot be completely decomposed into dialog-level attributes, there is a strong relationship between some objective dialog attributes and judgments of dialog quality; (ii) for the task of dialog-level quality estimation, a supervised model trained on dialog-level annotations outperforms methods based purely on aggregating turn-level features; and (iii) the proposed evaluation model shows better domain generalization ability compared to the baselines. On the basis of these results, we argue that having high-quality human-annotated data is an important component of evaluating interaction quality for large industrial-scale voice assistant platforms.
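A minimal sketch of the comparison the abstract describes, contrasting a turn-aggregation baseline with a supervised dialog-level quality model; the feature names, labels, and threshold below are illustrative assumptions, not taken from the paper.

# Illustrative only: the paper's features, labels, and model are not public.
import numpy as np
from sklearn.linear_model import LogisticRegression

def turn_aggregation_baseline(turn_defect_scores, threshold=0.5):
    """Baseline: call a dialog poor when the mean turn-level defect score is high."""
    return int(np.mean(turn_defect_scores) <= threshold)

# Hypothetical dialog-level features: [goal_completed, negative_sentiment, turn_defect_rate]
X_train = np.array([[1, 0, 0.10],
                    [0, 1, 0.60],
                    [1, 0, 0.20],
                    [0, 1, 0.80]])
y_train = np.array([1, 0, 1, 0])  # 1 = good dialog, 0 = poor dialog (DQA-style rating)

dialog_level_model = LogisticRegression().fit(X_train, y_train)
print(dialog_level_model.predict([[1, 0, 0.15]]))  # expected: [1]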
Multi-User MultiWOZ: Task-Oriented Dialogues among Multiple Users
Yohan Jo | Xinyan Zhao | Arijit Biswas | Nikoletta Basiou | Vincent Auvray | Nikolaos Malandrakis | Angeliki Metallinou | Alexandros Potamianos
Findings of the Association for Computational Linguistics: EMNLP 2023
While most task-oriented dialogues assume conversations between the agent and one user at a time, dialogue systems are increasingly expected to communicate simultaneously with multiple users who make decisions collaboratively. To facilitate development of such systems, we release the Multi-User MultiWOZ dataset: task-oriented dialogues among two users and one agent. To collect this dataset, each user utterance from MultiWOZ 2.2 was replaced with a small chat between two users that is semantically and pragmatically consistent with the original user utterance, thus resulting in the same dialogue state and system response. These dialogues reflect interesting dynamics of collaborative decision-making in task-oriented scenarios, e.g., social chatter and deliberation. Supported by this data, we propose the novel task of multi-user contextual query rewriting: to rewrite a task-oriented chat between two users as a concise task-oriented query that retains only task-relevant information and that is directly consumable by the dialogue system. We demonstrate that in multi-user dialogues, using predicted rewrites substantially improves dialogue state tracking without modifying existing dialogue systems that are trained for single-user dialogues. Further, this method surpasses training a medium-sized model directly on multi-user dialogues and generalizes to unseen domains.
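A toy illustration of the multi-user contextual query rewriting interface; the placeholder heuristic below only shows the input/output shape the task implies (the paper uses trained rewrite models, not rules).

# Toy illustration of the rewriting interface; a trained seq2seq rewriter
# would replace the placeholder heuristic.
from typing import List, Tuple

def rewrite_multi_user_chat(chat: List[Tuple[str, str]]) -> str:
    """chat: list of (speaker, utterance) pairs between two users.
    Returns a single task-oriented query retaining only task-relevant content."""
    # Placeholder heuristic: keep the last utterance that is not social chatter.
    chatter = {"sounds good", "ok", "sure", "what do you think?"}
    for speaker, utterance in reversed(chat):
        if utterance.strip().lower() not in chatter:
            return utterance
    return ""

chat = [
    ("User A", "Should we book an Italian place tonight?"),
    ("User B", "Sounds good"),
    ("User A", "Book a table for two at an Italian restaurant at 7pm"),
]
# The rewritten query can be passed unchanged to an existing single-user
# dialogue state tracker.
print(rewrite_multi_user_chat(chat))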
2021
Alexa Conversations: An Extensible Data-driven Approach for Building Task-oriented Dialogue Systems
Anish Acharya | Suranjit Adhikari | Sanchit Agarwal | Vincent Auvray | Nehal Belgamwar | Arijit Biswas | Shubhra Chandra | Tagyoung Chung | Maryam Fazel-Zarandi | Raefer Gabriel | Shuyang Gao | Rahul Goel | Dilek Hakkani-Tur | Jan Jezabek | Abhay Jha | Jiun-Yu Kao | Prakash Krishnan | Peter Ku | Anuj Goyal | Chien-Wei Lin | Qing Liu | Arindam Mandal | Angeliki Metallinou | Vishal Naik | Yi Pan | Shachi Paul | Vittorio Perera | Abhishek Sethi | Minmin Shen | Nikko Strom | Eddie Wang
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations
Traditional goal-oriented dialogue systems rely on various components such as natural language understanding, dialogue state tracking, policy learning and response generation. Training each component requires annotations which are hard to obtain for every new domain, limiting scalability of such systems. Similarly, rule-based dialogue systems require extensive writing and maintenance of rules and do not scale either. End-to-end dialogue systems, on the other hand, do not require module-specific annotations but need a large amount of data for training. To overcome these problems, in this demo, we present Alexa Conversations, a new approach for building goal-oriented dialogue systems that is scalable, extensible, and data efficient. The components of this system are trained in a data-driven manner, but instead of collecting annotated conversations for training, we generate them using a novel dialogue simulator based on a few seed dialogues and specifications of APIs and entities provided by the developer. Our approach provides out-of-the-box support for natural conversational phenomena such as entity sharing across turns or users changing their minds during a conversation, without requiring developers to provide any such dialogue flows. We exemplify our approach using a simple pizza ordering task and showcase its value in reducing the developer burden for creating a robust experience. Finally, we evaluate our system using a typical movie ticket booking task integrated with live APIs and show that the dialogue simulator is an essential component of the system that leads to over 50% improvement in turn-level action signature prediction accuracy.
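An illustrative sketch of the kind of developer inputs the abstract mentions (API and entity specifications plus a seed dialogue) and how a simulator could expand them into synthetic training dialogues; the names and structure here are assumptions for illustration, not the actual Alexa Conversations format.

# Illustrative only: toy API spec, seed dialogue, and simulator stand-in.
import random

api_spec = {"OrderPizza": {"size": ["small", "medium", "large"],
                           "topping": ["pepperoni", "mushroom", "veggie"]}}

seed_dialogue = [
    ("User", "I'd like a {size} {topping} pizza"),
    ("Agent", "Ordering a {size} {topping} pizza now."),
]

def simulate(api_spec, seed, n=3, rng_seed=0):
    """Generate dialogue variants by re-sampling entity values in the seed."""
    rng = random.Random(rng_seed)
    slots = api_spec["OrderPizza"]
    dialogues = []
    for _ in range(n):
        values = {slot: rng.choice(choices) for slot, choices in slots.items()}
        dialogues.append([(speaker, utt.format(**values)) for speaker, utt in seed])
    return dialogues

for dialogue in simulate(api_spec, seed_dialogue):
    print(dialogue)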
2019
Simple Question Answering with Subgraph Ranking and Joint-Scoring
Wenbo Zhao | Tagyoung Chung | Anuj Goyal | Angeliki Metallinou
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Knowledge graph based simple question answering (KBSQA) is a major area of research within question answering. Although it deals only with simple questions, i.e., questions that can be answered through a single knowledge base (KB) fact, this task is neither simple nor close to being solved. Targeting the two main steps, subgraph selection and fact selection, the literature has developed sophisticated approaches. However, the importance of subgraph ranking and leveraging the subject–relation dependency of a KB fact have not been sufficiently explored. Motivated by this, we present a unified framework to describe and analyze existing approaches. Using this framework as a starting point, we focus on two aspects: improving subgraph selection through a novel ranking method, and leveraging the subject–relation dependency by proposing a joint scoring CNN model with a novel loss function that enforces the well-order of scores. Our methods achieve a new state of the art (85.44% in accuracy) on the SimpleQuestions dataset.
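A hedged sketch of a margin-based ranking objective in the spirit of "enforcing the well-order of scores"; the paper's exact joint-scoring loss differs, so this PyTorch snippet is a generic stand-in for illustration.

# Generic margin ranking loss, not the paper's exact formulation.
import torch

def ordering_loss(pos_score, neg_scores, margin=0.5):
    """Penalize any negative candidate whose score comes within `margin`
    of the positive (gold subject-relation) score."""
    return torch.clamp(margin - (pos_score - neg_scores), min=0).mean()

pos = torch.tensor(2.0)           # score of the gold subject-relation pair
negs = torch.tensor([1.9, 0.3, 1.2])  # scores of negative candidates
print(ordering_loss(pos, negs))   # tensor(0.1333)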
Controlled Text Generation for Data Augmentation in Intelligent Artificial Agents
Nikolaos Malandrakis | Minmin Shen | Anuj Goyal | Shuyang Gao | Abhishek Sethi | Angeliki Metallinou
Proceedings of the 3rd Workshop on Neural Generation and Translation
Data availability is a bottleneck during early stages of development of new capabilities for intelligent artificial agents. We investigate the use of text generation techniques to augment the training data of a popular commercial artificial agent across categories of functionality, with the goal of faster development of new functionality. We explore a variety of encoder-decoder generative models for synthetic training data generation and propose using conditional variational auto-encoders. Our approach requires only direct optimization, works well with limited data, and significantly outperforms previous controlled text generation techniques. Further, the generated data are used as additional training samples in an extrinsic intent classification task, leading to improved performance by up to 5% absolute F-score in low-resource cases, validating the usefulness of our approach.
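A minimal sketch of a conditional VAE-style training objective for text (token-level reconstruction plus a KL term); the architecture and conditioning details are simplified assumptions rather than the paper's exact model.

# Standard text-CVAE ELBO terms; shapes below are toy values.
import torch

def cvae_loss(recon_logits, target_ids, mu, logvar, pad_id=0):
    """ELBO for a text CVAE: token cross-entropy + KL(q(z|x,c) || N(0, I))."""
    recon = torch.nn.functional.cross_entropy(
        recon_logits.view(-1, recon_logits.size(-1)),
        target_ids.view(-1),
        ignore_index=pad_id,
    )
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# toy shapes: batch of 2 sequences, length 4, vocab 10, latent dim 8
logits = torch.randn(2, 4, 10)
targets = torch.randint(1, 10, (2, 4))
mu, logvar = torch.zeros(2, 8), torch.zeros(2, 8)
print(cvae_loss(logits, targets, mu, logvar))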
2018
Fast and Scalable Expansion of Natural Language Understanding Functionality for Intelligent Agents
Anuj Kumar Goyal | Angeliki Metallinou | Spyros Matsoukas
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers)
Fast expansion of natural language functionality of intelligent virtual agents is critical for achieving engaging and informative interactions. However, developing accurate models for new natural language domains is a time- and data-intensive process. We propose efficient deep neural network architectures that maximally re-use available resources through transfer learning. Our methods are applied for expanding the understanding capabilities of a popular commercial agent and are evaluated on hundreds of new domains, designed by internal or external developers. We demonstrate that our proposed methods significantly increase accuracy in low-resource settings and enable rapid development of accurate models with less data.
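A hedged sketch of the general transfer-learning recipe the abstract describes: reuse a shared encoder and train only a small head for each new, low-resource domain. Layer sizes and architecture are illustrative assumptions, not the paper's models.

# Illustrative only: freeze a shared encoder, train a per-domain classifier head.
import torch
import torch.nn as nn

shared_encoder = nn.Sequential(nn.Embedding(5000, 128),
                               nn.LSTM(128, 64, batch_first=True))

class NewDomainHead(nn.Module):
    def __init__(self, encoder, n_intents):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():   # keep shared weights frozen
            p.requires_grad = False
        self.classifier = nn.Linear(64, n_intents)

    def forward(self, token_ids):
        emb = self.encoder[0](token_ids)       # shared embeddings
        outputs, _ = self.encoder[1](emb)      # shared LSTM encoder
        return self.classifier(outputs[:, -1]) # intent logits from last state

model = NewDomainHead(shared_encoder, n_intents=5)
print(model(torch.randint(0, 5000, (2, 7))).shape)  # torch.Size([2, 5])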
2014
Syllable and language model based features for detecting non-scorable tests in spoken language proficiency assessment applications
Angeliki Metallinou | Jian Cheng
Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications
2013
Discriminative state tracking for spoken dialog systems
Angeliki Metallinou | Dan Bohus | Jason Williams
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)