Vasudha Bhatnagar


2023

Citation-Based Summarization of Landmark Judgments
Purnima Bindal | Vikas Kumar | Vasudha Bhatnagar | Parikshet Sirohi | Ashwini Siwal
Proceedings of the 20th International Conference on Natural Language Processing (ICON)

Landmark judgments are of prime importance in the Common Law System because of their exceptional jurisprudence and the frequent references to them in other judgments. In this work, we leverage the contextual references available in citing judgments to create an extractive summary of the target judgment. We evaluate the proposed algorithm on two datasets curated from judgments of Indian courts and find the results promising.
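
As a rough illustration of the citation-guided extractive idea (not the exact algorithm of the paper), the sketch below scores sentences of the target judgment by their TF-IDF similarity to the citing contexts (citances) and returns the highest-scoring ones; the function name and the similarity measure are assumptions made for illustration.

```python
# Minimal sketch of citation-guided extractive summarization, assuming
# TF-IDF cosine similarity as the relevance signal (an illustrative choice).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def citation_based_summary(target_sentences, citances, budget=10):
    """Pick the `budget` target-judgment sentences that best match the citances."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on both sets so target sentences and citances share one vocabulary.
    matrix = vectorizer.fit_transform(target_sentences + citances)
    target_vecs = matrix[: len(target_sentences)]
    citance_vecs = matrix[len(target_sentences):]

    # Score each target sentence by its best similarity to any citing context.
    scores = cosine_similarity(target_vecs, citance_vecs).max(axis=1)
    ranked = sorted(range(len(target_sentences)), key=lambda i: scores[i], reverse=True)
    return [target_sentences[i] for i in sorted(ranked[:budget])]  # keep document order
```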

Infusing Knowledge into Large Language Models with Contextual Prompts
Kinshuk Vasisht | Balaji Ganesan | Vikas Kumar | Vasudha Bhatnagar
Proceedings of the 20th International Conference on Natural Language Processing (ICON)

Knowledge infusion is a promising method for enhancing Large Language Models for domain-specific NLP tasks, as opposed to pre-training models from scratch on large corpora. These augmented LLMs typically depend on additional pre-training or on knowledge prompts drawn from an existing knowledge graph, which is impractical in many applications. In contrast, knowledge infusion directly from relevant documents is more generalisable: it alleviates the need for structured knowledge graphs and also covers entities that are usually not found in any knowledge graph. With this motivation, we propose a simple yet generalisable approach for knowledge infusion that generates prompts from the context in the input text. Our experiments show the effectiveness of our approach, which we evaluate by probing the fine-tuned LLMs.
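
The sketch below illustrates the general notion of building a knowledge prompt from context retrieved out of relevant documents instead of a knowledge graph; the TF-IDF retrieval step and the prompt template are illustrative assumptions, not the method proposed in the paper.

```python
# Sketch of contextual prompt construction from relevant documents,
# assuming a simple TF-IDF retriever and a generic prompt template.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def build_contextual_prompt(query, documents, k=3):
    """Retrieve the k passages most similar to the query and fold them into a prompt."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([query] + documents)
    # Similarity of the query (row 0) to every candidate passage.
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    top = scores.argsort()[::-1][:k]
    context = "\n".join(documents[i] for i in top)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```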

2020

Divide and Conquer: From Complexity to Simplicity for Lay Summarization
Rochana Chaturvedi | Saachi | Jaspreet Singh Dhani | Anurag Joshi | Ankush Khanna | Neha Tomar | Swagata Duari | Alka Khurana | Vasudha Bhatnagar
Proceedings of the First Workshop on Scholarly Document Processing

We describe our approach for the 1st Computational Linguistics Lay Summary Shared Task (CL-LaySumm20). The task is to produce non-technical summaries of scholarly documents; the summary should be within the easy grasp of a layman who may not be well versed in the domain of the research article. We propose a two-step divide-and-conquer approach. First, we judiciously select segments of the document that are not overly pedantic and are likely to be of interest to the laity, and over-extract sentences from each segment using an unsupervised network-based method. Next, we perform abstractive summarization on these extractions and systematically merge the abstractions. We run ablation studies to establish that each step in our pipeline is critical for improving the quality of the lay summary. Our approach leverages state-of-the-art pre-trained deep neural network models as zero-shot learners to achieve high scores on the task.
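
A rough sketch of such a two-step pipeline is given below: each selected segment is over-extracted with a simple scorer, and the extraction is then compressed zero-shot by a pre-trained abstractive model. The TF-IDF centroid extractor and the BART checkpoint are stand-ins for illustration, not necessarily the components used in the submission.

```python
# Sketch of the divide-and-conquer pipeline: per-segment over-extraction
# followed by zero-shot abstractive compression and merging.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from transformers import pipeline

# Zero-shot abstractive summarizer; the checkpoint is an illustrative choice.
abstractor = pipeline("summarization", model="facebook/bart-large-cnn")


def over_extract(sentences, ratio=0.5):
    """Keep the sentences closest to the segment's TF-IDF centroid (a stand-in extractor)."""
    vecs = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    centroid = np.asarray(vecs.mean(axis=0)).ravel()
    scores = vecs.dot(centroid)
    keep = max(1, int(len(sentences) * ratio))
    top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:keep])
    return [sentences[i] for i in top]


def lay_summary(segments):
    """segments: a list of sentence lists, one per lay-relevant section of the article."""
    parts = []
    for sentences in segments:
        extraction = " ".join(over_extract(sentences))
        summary = abstractor(extraction, max_length=120, min_length=30)[0]["summary_text"]
        parts.append(summary)
    return " ".join(parts)
```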

NMF Ensembles? Not for Text Summarization!
Alka Khurana | Vasudha Bhatnagar
Proceedings of the First Workshop on Insights from Negative Results in NLP

Non-negative Matrix Factorization (NMF) has been used for text analytics with promising results. Instability of results arising from stochastic variations during initialization makes a case for the use of ensemble techniques. However, our extensive empirical investigation indicates otherwise. In this paper, we establish that the ensemble summary for a single document using NMF is no better than the summary of the best base model.
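
For context, the sketch below shows a typical NMF-based extractive scorer for a single document; the scoring scheme (topic loadings weighted by topic prominence) is a common variant assumed for illustration, not necessarily the exact scheme studied in the paper. Re-running it with different random seeds and averaging the scores yields the kind of base-model ensemble whose benefit the paper calls into question.

```python
# Sketch of NMF-based single-document extraction; seed-to-seed variation in
# the random initialization is the source of the instability discussed above.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer


def nmf_summary(sentences, n_topics=3, budget=5, seed=0):
    """Extract `budget` sentences using an NMF factorization of the sentence-term matrix."""
    A = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    model = NMF(n_components=n_topics, init="random", random_state=seed, max_iter=400)
    W = model.fit_transform(A)           # sentence-topic loadings
    topic_strength = W.sum(axis=0)       # prominence of each latent topic
    scores = W @ topic_strength          # weight loadings by topic prominence
    top = sorted(np.argsort(scores)[::-1][:budget])
    return [sentences[i] for i in top]
```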