Chandra Khatri


2019

Proceedings of the 1st Workshop on Discourse Structure in Neural NLG
Anusha Balakrishnan | Vera Demberg | Chandra Khatri | Abhinav Rastogi | Donia Scott | Marilyn Walker | Michael White
Proceedings of the 1st Workshop on Discourse Structure in Neural NLG

Towards Coherent and Engaging Spoken Dialog Response Generation Using Automatic Conversation Evaluators
Sanghyun Yi | Rahul Goel | Chandra Khatri | Alessandra Cervone | Tagyoung Chung | Behnam Hedayatnia | Anu Venkatesh | Raefer Gabriel | Dilek Hakkani-Tur
Proceedings of the 12th International Conference on Natural Language Generation

Encoder-decoder-based neural architectures serve as the basis of state-of-the-art approaches in end-to-end open-domain dialog systems. Since most such systems are trained with a maximum likelihood estimation (MLE) objective, they suffer from issues such as lack of generalizability and the generic response problem, i.e., a system response that can be an answer to a large number of user utterances, e.g., “Maybe, I don’t know.” Having explicit feedback on the relevance and interestingness of a system response at each turn can be a useful signal for mitigating such issues and improving system quality by selecting responses from different approaches. Towards this goal, we present a system that evaluates chatbot responses at each dialog turn for coherence and engagement. Our system provides explicit turn-level dialog quality feedback, which we show to be highly correlated with human evaluation. To show that incorporating this feedback into neural response generation models improves dialog quality, we present two different and complementary mechanisms for incorporating explicit feedback into a neural response generation model: reranking and direct modification of the loss function during training. Our studies show that a response generation model that incorporates these combined feedback mechanisms produces more engaging and coherent responses in an open-domain spoken dialog setting, significantly improving response quality under both automatic and human evaluation.
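
A minimal sketch of the reranking mechanism described above, written as a generic illustration rather than the authors' implementation: two hypothetical evaluator functions (score_coherence, score_engagement) stand in for the trained turn-level conversation evaluators, and candidate responses are ordered by a weighted combination of their scores. All names, weights, and the toy evaluators in the usage example are assumptions for illustration only.

# Sketch of evaluator-based response reranking (illustrative, not the paper's code).
from typing import Callable, List, Tuple

def rerank_responses(
    context: List[str],
    candidates: List[str],
    score_coherence: Callable[[List[str], str], float],
    score_engagement: Callable[[List[str], str], float],
    coherence_weight: float = 0.5,
) -> List[Tuple[str, float]]:
    """Order candidate responses by a weighted combination of the
    coherence and engagement scores, highest first."""
    scored = [
        (
            cand,
            coherence_weight * score_coherence(context, cand)
            + (1.0 - coherence_weight) * score_engagement(context, cand),
        )
        for cand in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    # Toy evaluators (hypothetical): reward context overlap and longer responses.
    def toy_coherence(ctx, resp):
        ctx_words = set(" ".join(ctx).lower().split())
        resp_words = set(resp.lower().split())
        return len(ctx_words & resp_words) / max(len(resp_words), 1)

    def toy_engagement(ctx, resp):
        return min(len(resp.split()) / 20.0, 1.0)

    context = ["What did you think of the new sci-fi movie?"]
    candidates = [
        "Maybe, I don't know.",
        "I loved the movie, especially the scenes set on the space station.",
    ]
    for response, score in rerank_responses(context, candidates, toy_coherence, toy_engagement):
        print(f"{score:.2f}  {response}")

In practice the same evaluator scores could also be folded into the training loss, the second mechanism mentioned in the abstract; the reranking path is shown here because it requires no change to the generator itself.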

Natural Language Generation at Scale: A Case Study for Open Domain Question Answering
Alessandra Cervone | Chandra Khatri | Rahul Goel | Behnam Hedayatnia | Anu Venkatesh | Dilek Hakkani-Tur | Raefer Gabriel
Proceedings of the 12th International Conference on Natural Language Generation

Current approaches to Natural Language Generation (NLG) for dialog mainly focus on domain-specific, task-oriented applications (e.g. restaurant booking) using limited ontologies (up to 20 slot types), usually without considering the previous conversation context. Furthermore, these approaches require large amounts of data for each domain, and do not benefit from examples that may be available for other domains. This work explores the feasibility of applying statistical NLG to scenarios requiring larger ontologies, such as multi-domain dialog applications or open-domain question answering (QA) based on knowledge graphs. We model NLG through an Encoder-Decoder framework using a large dataset of interactions between real-world users and a conversational agent for open-domain QA. First, we investigate the impact of increasing the number of slot types on the generation quality and experiment with different partitions of the QA data with progressively larger ontologies (up to 369 slot types). Second, we perform multi-task learning experiments between open-domain QA and task-oriented dialog, and benchmark our model on a popular NLG dataset. Moreover, we experiment with using the conversational context as an additional input to improve response generation quality. Our experiments show the feasibility of learning statistical NLG models for open-domain QA with larger ontologies.
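A minimal sketch, under stated assumptions, of the data preparation such an encoder-decoder NLG setup typically relies on: slot-value pairs from a meaning representation (and optionally the previous conversational turn) are linearized into a single encoder input string. The special tokens, slot names, and example below are illustrative and not taken from the authors' pipeline.

# Sketch of linearizing a slot-based meaning representation for a seq2seq NLG model
# (illustrative assumptions, not the paper's preprocessing).
from typing import Dict, Optional

def linearize_input(
    meaning_representation: Dict[str, str],
    previous_turn: Optional[str] = None,
) -> str:
    """Flatten slot-value pairs (and optional dialog context) into the
    token sequence fed to the encoder."""
    slot_tokens = " ".join(
        f"<{slot}> {value}" for slot, value in sorted(meaning_representation.items())
    )
    if previous_turn:
        return f"<context> {previous_turn} <mr> {slot_tokens}"
    return f"<mr> {slot_tokens}"

if __name__ == "__main__":
    mr = {
        "entity": "Mount Everest",
        "relation": "height",
        "value": "8,849 metres",
    }
    print(linearize_input(mr, previous_turn="How tall is Mount Everest?"))
    # <context> How tall is Mount Everest? <mr> <entity> Mount Everest <relation> height <value> 8,849 metres

With this kind of flat encoding, growing the ontology from tens to hundreds of slot types only changes the input vocabulary of slot tokens, which is one way to read the abstract's experiments with progressively larger ontologies.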