Luis Lastras


2023

Pointwise Mutual Information Based Metric and Decoding Strategy for Faithful Generation in Document Grounded Dialogs
Yatin Nandwani | Vineet Kumar | Dinesh Raghu | Sachindra Joshi | Luis Lastras
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

A major concern in using deep learning based generative models for document-grounded dialogs is the potential generation of responses that are not faithful to the underlying document. Existing automated metrics for evaluating the faithfulness of a response with respect to the grounding document measure the degree of similarity between the generated response and the document’s content. However, these automated metrics are far from well aligned with human judgments. Therefore, to improve the measurement of faithfulness, we propose a new metric that utilizes (Conditional) Pointwise Mutual Information (PMI) between the generated response and the source document, conditioned on the dialogue. PMI quantifies the extent to which the document influences the generated response, with a higher PMI indicating a more faithful response. We build upon this idea to create a new decoding technique that incorporates PMI into the response generation process to predict more faithful responses. Our experiments on the BEGIN benchmark demonstrate an improved correlation of our metric with human evaluation. We also show that our decoding technique is effective in generating more faithful responses when compared to standard decoding techniques on a set of publicly available document-grounded dialog datasets.
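The underlying quantity can be read as conditional pointwise mutual information, PMI(r; d | c) = log p(r | d, c) - log p(r | c), where r is the response, d the grounding document, and c the dialogue context. Below is a minimal sketch of how such a score could be computed with an off-the-shelf sequence-to-sequence scorer; the model choice, the way the inputs are concatenated, and the function names are illustrative assumptions, not the paper's implementation.

    import torch
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    # Any seq2seq generator can serve as the scorer; BART is only an example.
    tok = AutoTokenizer.from_pretrained("facebook/bart-base")
    model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
    model.eval()

    def sequence_log_prob(source: str, target: str) -> float:
        """Total log-probability of `target` given `source` under the model."""
        inputs = tok(source, return_tensors="pt", truncation=True)
        labels = tok(target, return_tensors="pt", truncation=True).input_ids
        with torch.no_grad():
            out = model(**inputs, labels=labels)
        # out.loss is the mean per-token negative log-likelihood of the labels
        return -out.loss.item() * labels.shape[1]

    def conditional_pmi(response: str, document: str, dialogue: str) -> float:
        """PMI(r; d | c) = log p(r | d, c) - log p(r | c)."""
        with_doc = sequence_log_prob(document + " " + dialogue, response)
        without_doc = sequence_log_prob(dialogue, response)
        return with_doc - without_doc

A larger value means that conditioning on the document changes the likelihood of the response more, which is the sense in which the metric treats the response as more faithful to the document.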

2022

DG2: Data Augmentation Through Document Grounded Dialogue Generation
Qingyang Wu | Song Feng | Derek Chen | Sachindra Joshi | Luis Lastras | Zhou Yu
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue

Collecting data for training dialog systems can be extremely expensive due to the involvement of human participants and the need for extensive annotation. Especially in document-grounded dialog systems, human experts need to carefully read the unstructured documents to answer users’ questions. As a result, existing document-grounded dialog datasets are relatively small-scale and hinder the effective training of dialogue systems. In this paper, we propose an automatic data augmentation technique grounded on documents through a generative dialogue model. The dialogue model consists of a user bot and an agent bot that can synthesize diverse dialogues given an input document; the synthesized dialogues are then used to train a downstream model. When supplementing the original dataset, our method achieves significant improvement over traditional data augmentation methods. We also achieve strong performance in the low-resource setting.
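A minimal sketch of the two-bot synthesis loop described above is given below; the turn-generation interface, the prompt layout, and the stopping rule are illustrative assumptions rather than the paper's implementation.

    from typing import Callable, List

    def synthesize_dialogue(document: str,
                            user_bot: Callable[[str, List[str]], str],
                            agent_bot: Callable[[str, List[str]], str],
                            max_turns: int = 10) -> List[str]:
        """Alternate a user bot and an agent bot, both conditioned on the
        grounding document and the dialogue so far, to produce one synthetic
        dialogue that can be added to the training data."""
        history: List[str] = []
        for _ in range(max_turns):
            user_turn = user_bot(document, history)
            if not user_turn:  # user bot signals that the conversation is over
                break
            history.append("user: " + user_turn)
            history.append("agent: " + agent_bot(document, history))
        return history

Running this loop over many input documents yields the augmented dialogues that are mixed with the original dataset when training the downstream model.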

2021

Does Structure Matter? Encoding Documents for Machine Reading Comprehension
Hui Wan | Song Feng | Chulaka Gunasekara | Siva Sankalp Patel | Sachindra Joshi | Luis Lastras
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Machine reading comprehension is a challenging task, especially for querying documents with deep and interconnected contexts. Transformer-based methods have shown strong performance on this task; however, most of them still treat documents as a flat sequence of tokens. This work proposes a new Transformer-based method that reads a document as tree slices. It contains two modules, for identifying the most relevant text passage and the best answer span respectively, which are not only jointly trained but also jointly consulted at inference time. Our evaluation results show that our proposed method outperforms several competitive baseline approaches on two datasets from varied domains.
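One way to picture the joint consultation at inference time is as a single search over (passage, answer span) pairs rather than a two-stage pipeline; the two scoring functions below stand in for the two modules and are assumptions for illustration only.

    from typing import Callable, Dict, List, Optional, Tuple

    def joint_decode(passages: List[str],
                     passage_score: Callable[[str], float],
                     span_scores: Callable[[str], Dict[Tuple[int, int], float]]
                     ) -> Optional[Tuple[str, Tuple[int, int]]]:
        """Return the (passage, span) pair maximizing the combined score,
        instead of first committing to a passage and then picking a span."""
        best: Optional[Tuple[str, Tuple[int, int]]] = None
        best_score = float("-inf")
        for passage in passages:
            p_score = passage_score(passage)
            for span, s_score in span_scores(passage).items():
                if p_score + s_score > best_score:
                    best_score = p_score + s_score
                    best = (passage, span)
        return best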

2020

Implicit Discourse Relation Classification: We Need to Talk about Evaluation
Najoung Kim | Song Feng | Chulaka Gunasekara | Luis Lastras
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Implicit relation classification on Penn Discourse TreeBank (PDTB) 2.0 is a common benchmark task for evaluating the understanding of discourse relations. However, the lack of consistency in preprocessing and evaluation poses challenges to fair comparison of results in the literature. In this work, we highlight these inconsistencies and propose an improved evaluation protocol. Paired with this protocol, we report strong baseline results from pretrained sentence encoders, which set the new state-of-the-art for PDTB 2.0. Furthermore, this work is the first to explore fine-grained relation classification on PDTB 3.0. We expect our work to serve as a point of comparison for future work, and also as an initiative to discuss models of larger context and possible data augmentations for downstream transferability.

Conversational Document Prediction to Assist Customer Care Agents
Jatin Ganhotra | Haggai Roitman | Doron Cohen | Nathaniel Mills | Chulaka Gunasekara | Yosi Mass | Sachindra Joshi | Luis Lastras | David Konopnicki
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

A frequent pattern in customer care conversations is agents responding with appropriate webpage URLs that address users’ needs. We study the task of predicting the documents that customer care agents can use to facilitate users’ needs. We also introduce a new public dataset which supports the aforementioned problem. Using this dataset and two others, we investigate state-of-the-art deep learning (DL) and information retrieval (IR) models for the task. Additionally, we analyze the practicality of such systems in terms of inference time complexity. Our results show that a hybrid IR+DL approach provides the best of both worlds.
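A common realization of such a hybrid, consistent with the trade-off discussed above, is to retrieve a candidate set with a cheap IR scorer and re-rank only those candidates with a neural model; the scorer interfaces below are illustrative assumptions, not the paper's system.

    from typing import Callable, List, Tuple

    def hybrid_document_prediction(query: str,
                                   documents: List[str],
                                   ir_score: Callable[[str, str], float],
                                   dl_score: Callable[[str, str], float],
                                   k: int = 50) -> List[Tuple[str, float]]:
        """Stage 1: fast IR scoring (e.g. BM25) over the whole collection.
        Stage 2: expensive neural scoring restricted to the top-k candidates."""
        candidates = sorted(documents, key=lambda d: ir_score(query, d),
                            reverse=True)[:k]
        reranked = [(d, dl_score(query, d)) for d in candidates]
        return sorted(reranked, key=lambda pair: pair[1], reverse=True)

This keeps inference time close to that of the IR baseline while letting the DL model determine the final ranking.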

doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset
Song Feng | Hui Wan | Chulaka Gunasekara | Siva Patel | Sachindra Joshi | Luis Lastras
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

We introduce doc2dial, a new dataset of goal-oriented dialogues that are grounded in the associated documents. Inspired by how authors compose documents for guiding end users, we first construct dialogue flows based on the content elements that correspond to higher-level relations across text sections as well as lower-level relations between discourse units within a section. Then we present these dialogue flows to crowd contributors to create conversational utterances. The dataset includes over 4,500 annotated conversations with an average of 14 turns that are grounded in over 450 documents from four domains. Compared to prior document-grounded dialogue datasets, this dataset covers a variety of dialogue scenes in information-seeking conversations. To evaluate the versatility of the dataset, we introduce multiple dialogue modeling tasks and present baseline approaches.

Agent Assist through Conversation Analysis
Kshitij Fadnis | Nathaniel Mills | Jatin Ganhotra | Haggai Roitman | Gaurav Pandey | Doron Cohen | Yosi Mass | Shai Erera | Chulaka Gunasekara | Danish Contractor | Siva Patel | Q. Vera Liao | Sachindra Joshi | Luis Lastras | David Konopnicki
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

Customer support agents play a crucial role as an interface between an organization and its end-users. We propose CAIRAA: Conversational Approach to Information Retrieval for Agent Assistance, to reduce the cognitive workload of support agents who engage with users through conversation systems. CAIRAA monitors an evolving conversation and recommends both responses and URLs of documents the agent can use in replies to their client. We combine traditional information retrieval (IR) approaches with more recent Deep Learning (DL) models to ensure high accuracy and efficient run-time performance in the deployed system. Here, we describe the CAIRAA system and demonstrate its effectiveness in a pilot study via a short video.