Chandan Reddy


2024

Synthesizing Conversations from Unlabeled Documents using Automatic Response Segmentation
Fanyou Wu | Weijie Xu | Chandan Reddy | Srinivasan Sengamedu
Findings of the Association for Computational Linguistics: ACL 2024

In this study, we tackle the challenge of inadequate and costly training data that has hindered the development of conversational question answering (ConvQA) systems. Enterprises hold large corpora of diverse internal documents. Rather than relying on a search engine, a more compelling way for people to comprehend these documents is through a dialogue system. In this paper, we propose a robust dialogue synthesis method. We learn to segment the data for the dialogue task instead of segmenting at sentence boundaries. The synthetic dataset generated by our proposed method achieves superior quality compared to WikiDialog, as assessed through machine and human evaluations. By employing our inpainted data to pre-train a ConvQA retrieval system, we observe a notable improvement in performance on the OR-QuAC benchmark.
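As a rough illustration of the pipeline the abstract describes, the sketch below splits a document into answer segments and pairs each segment with a generated question. Both components are hypothetical stand-ins (a fixed-size sentence window and a string template); the paper instead *learns* segment boundaries for the dialogue task and uses a trained model to inpaint the question turns.

```python
# Minimal sketch of document-to-dialogue synthesis: segment a document
# into answer spans, then "inpaint" a question before each span.
# The segmenter and question generator here are illustrative stand-ins,
# not the authors' models.

from typing import List, Tuple

def segment_document(sentences: List[str], window: int = 2) -> List[str]:
    """Group sentences into answer segments.

    Stand-in for a learned segmenter: merges every `window` consecutive
    sentences. A learned model would place boundaries where a natural
    conversational answer ends, not at fixed positions.
    """
    return [" ".join(sentences[i:i + window])
            for i in range(0, len(sentences), window)]

def generate_question(segment: str) -> str:
    """Stand-in for a question generator (e.g., a seq2seq model) that
    produces the user turn preceding each answer segment."""
    return f"Can you tell me about the following: {segment[:40]}...?"

def synthesize_dialogue(sentences: List[str]) -> List[Tuple[str, str]]:
    """Produce (question, answer) turns from an unlabeled document."""
    return [(generate_question(seg), seg) for seg in segment_document(sentences)]

if __name__ == "__main__":
    doc = ["Enterprises hold large corpora of internal documents.",
           "A dialogue interface can make them easier to explore.",
           "Synthetic conversations can pre-train ConvQA retrievers.",
           "Segment quality strongly affects the resulting dialogues."]
    for q, a in synthesize_dialogue(doc):
        print("Q:", q)
        print("A:", a)
```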

2023

Transformer-based Models for Long-Form Document Matching: Challenges and Empirical Analysis
Akshita Jha | Adithya Samavedhi | Vineeth Rakesh | Jaideep Chandrashekar | Chandan Reddy
Findings of the Association for Computational Linguistics: EACL 2023

Recent advances in the area of long document matching have primarily focused on using transformer-based models for long document encoding and matching. There are two primary challenges associated with these models. Firstly, the performance gain provided by transformer-based models comes at a steep cost, both in terms of the required training time and the resource (memory and energy) consumption. The second major limitation is their inability to handle more than a pre-defined input token length at a time. In this work, we empirically demonstrate the effectiveness of simple neural models (such as feed-forward networks and CNNs) and simple embeddings (like GloVe and Paragraph Vector) over transformer-based models on the task of document matching. We show that simple models outperform the more complex BERT-based models while taking significantly less training time, energy, and memory. The simple models are also more robust to variations in document length and text perturbations.
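As a rough sketch of the kind of simple baseline the abstract refers to, the snippet below represents each document as the average of its word vectors and scores a pair with cosine similarity. The randomly initialized embedding table is a stand-in for pretrained GloVe vectors, and plain cosine similarity stands in for the paper's feed-forward or CNN matchers; all names here are illustrative.

```python
# Minimal sketch of a simple document-matching baseline:
# averaged word embeddings + cosine similarity.
# Random vectors stand in for pretrained GloVe embeddings.

import numpy as np

rng = np.random.default_rng(0)
DIM = 50
vocab: dict = {}  # lazily filled stand-in embedding table

def embed(doc: str) -> np.ndarray:
    """Average word embeddings; handles arbitrary document length,
    unlike a transformer with a fixed input-token limit."""
    vecs = []
    for tok in doc.lower().split():
        if tok not in vocab:
            vocab[tok] = rng.normal(size=DIM)
        vecs.append(vocab[tok])
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

def match_score(doc_a: str, doc_b: str) -> float:
    """Cosine similarity between averaged-embedding representations."""
    a, b = embed(doc_a), embed(doc_b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

print(match_score("long document about neural text matching",
                  "a paper on matching long documents with neural models"))
```

A length-agnostic representation like this sidesteps the fixed input-token limit of transformer encoders, which is one of the two limitations the abstract highlights.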