Nikhil Verma


2024

Personal Large Language Model Agents: A Case Study on Tailored Travel Planning
Harmanpreet Singh | Nikhil Verma | Yixiao Wang | Manasa Bharadwaj | Homa Fashandi | Kevin Ferreira | Chul Lee
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track

Large Language Models (LLMs) have made significant progress, becoming more autonomous and capable of handling real-world tasks through their access to tools, various planning strategies, and memory; such systems are referred to as LLM agents. One emerging area of focus is customizing these models to cater to individual user preferences, thereby shaping them into personal LLM agents. This work investigates how the user model, which encapsulates user-related information, preferences, and personal concepts, influences an LLM agent’s planning and reasoning capabilities. We introduce a personalized version of TravelPlanner, called TravelPlanner+, and establish baselines for personal LLM agents. Our evaluation strategy contains an LLM-as-a-Judge component, which provides further in-depth insights into the decision-making process of a personal LLM agent by comparing generic and personal plans. Our findings reveal that while generic plans perform robustly, personal plans show marked improvement in relevance and suitability, with preference rates up to 74.4% on validation and 87.3% on the test set. These results highlight the potential of personal LLM agents to significantly enhance user satisfaction.
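As a rough illustration of the LLM-as-a-Judge comparison described above, the sketch below computes a preference rate from pairwise verdicts between a generic and a personal plan for each query. The `judge` callable, the "A"/"B" verdict labels, and the toy stand-in judge are assumptions for illustration, not the paper's actual evaluation code.

```python
# Minimal sketch (not the paper's code) of an LLM-as-a-Judge preference rate:
# for each query, a judge compares the generic plan ("A") and the personal
# plan ("B"); the preference rate is the fraction of wins for the personal plan.
from typing import Callable, Sequence

def preference_rate(
    queries: Sequence[str],
    generic_plans: Sequence[str],
    personal_plans: Sequence[str],
    judge: Callable[[str, str, str], str],  # hypothetical judge returning "A" or "B"
) -> float:
    """Fraction of queries on which the judge prefers the personal plan."""
    wins = 0
    for query, generic, personal in zip(queries, generic_plans, personal_plans):
        verdict = judge(query, generic, personal)
        if verdict == "B":
            wins += 1
    return wins / len(queries)

if __name__ == "__main__":
    # Stand-in judge for demonstration only: prefers the more detailed plan.
    toy_judge = lambda q, a, b: "B" if len(b) > len(a) else "A"
    rate = preference_rate(
        ["3-day trip to Kyoto"],
        ["Day 1: temples. Day 2: market."],
        ["Day 1: quiet temples (user avoids crowds). Day 2: vegetarian food tour."],
        toy_judge,
    )
    print(f"personal-plan preference rate: {rate:.1%}")
```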

2023

COUNT: COntrastive UNlikelihood Text Style Transfer for Text Detoxification
Mohammad Mahdi Abdollah Pour | Parsa Farinneya | Manasa Bharadwaj | Nikhil Verma | Ali Pesaranghader | Scott Sanner
Findings of the Association for Computational Linguistics: EMNLP 2023

Offensive and toxic text on social media platforms can lead to polarization and divisiveness within online communities and hinder constructive dialogue. Text detoxification is a crucial task in natural language processing to ensure the generation of non-toxic and safe text. Text detoxification is a special case of the Text Style Transfer (TST) problem, where an input text is rephrased to an output text that preserves its content while modifying the style (in this case to a more neutral, non-toxic style). State-of-the-art methods for detoxification use supervised training of encoder-decoder models to produce gold-standard outputs with a standard likelihood-based objective. However, it can be hard for these models to deviate from their pretrained auto-encoder identity mapping. While previous methods have used unlikelihood-based losses to penalize input-to-output copying of toxic content, these methods also unfortunately penalize non-toxic content in the input that would be fine to preserve in the output. To address these issues, we introduce a novel contrastive unlikelihood objective (COUNT) that directly contrasts the gold standard rephrasing with the identity input-to-output mapping to effectively isolate and focus learning on non-toxic style transfer. We benchmark COUNT on two parallel datasets, ParaDetox and APPDIA, showing that it achieves significant improvements in jointly combined fluency, content preservation, and detoxification (i.e., the highest “J” score).
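The following PyTorch sketch shows one plausible way to realize a contrastive unlikelihood term of the kind described above: a likelihood loss on the gold detoxified reference plus an unlikelihood penalty on copied input tokens, applied only where the identity copy and the gold rephrasing disagree, so shared non-toxic content is not penalized. The function name, the masking scheme, and the `alpha` weight are assumptions for illustration, not the released COUNT implementation.

```python
# Hedged sketch of a contrastive unlikelihood objective (illustrative only).
import torch
import torch.nn.functional as F

def contrastive_unlikelihood_loss(
    logits: torch.Tensor,     # (batch, seq_len, vocab) decoder logits
    gold_ids: torch.Tensor,   # (batch, seq_len) gold detoxified reference tokens
    copy_ids: torch.Tensor,   # (batch, seq_len) identity (copied-input) tokens
    alpha: float = 1.0,       # weight on the unlikelihood term (assumed)
) -> torch.Tensor:
    log_probs = F.log_softmax(logits, dim=-1)

    # Likelihood term: standard negative log-likelihood on the gold rephrasing.
    nll = F.nll_loss(log_probs.transpose(1, 2), gold_ids, reduction="mean")

    # Unlikelihood term: push probability mass away from the copied token,
    # but only at positions where the copy differs from the gold reference.
    copy_log_p = log_probs.gather(-1, copy_ids.unsqueeze(-1)).squeeze(-1)
    copy_p = copy_log_p.exp().clamp(max=1.0 - 1e-6)
    disagree = (gold_ids != copy_ids).float()
    unlikelihood = -(torch.log1p(-copy_p) * disagree).sum() / disagree.sum().clamp(min=1.0)

    return nll + alpha * unlikelihood
```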

2020

Neural Conversational QA: Learning to Reason vs Exploiting Patterns
Nikhil Verma | Abhishek Sharma | Dhiraj Madan | Danish Contractor | Harshit Kumar | Sachindra Joshi
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Neural Conversational QA tasks such as ShARC require systems to answer questions based on the contents of a given passage. On studying recent state-of-the-art models on the ShARC QA task, we found indications that the models learn spurious clues/patterns in the dataset. Further, a heuristic-based program, built to exploit these patterns, had performance comparable to that of the neural models. In this paper we share our findings about the four types of patterns in the ShARC corpus and how the neural models exploit them. Motivated by the above findings, we create and share a modified dataset that has fewer spurious patterns than the original, consequently allowing models to learn better.
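As an illustration of the kind of heuristic program mentioned above, the sketch below predicts a ShARC-style decision ("Yes" / "No" / "Irrelevant" / "More") from surface cues alone, without reasoning over the rule's logic. The specific cues and thresholds are invented for this example; they are not the actual patterns identified in the paper.

```python
# Hypothetical surface-pattern heuristic for a ShARC-style decision task.
def heuristic_decision(rule_text: str, scenario: str, question: str,
                       history: list[tuple[str, str]]) -> str:
    rule_tokens = set(rule_text.lower().split())
    question_tokens = set(question.lower().split())

    # Cue 1 (assumed): little lexical overlap between question and rule text
    # suggests the rule does not apply.
    overlap = len(rule_tokens & question_tokens) / max(len(question_tokens), 1)
    if overlap < 0.2:
        return "Irrelevant"

    # Cue 2 (assumed): no follow-up questions asked yet, so ask for more info.
    if not history:
        return "More"

    # Cue 3 (assumed): once the dialogue is as long as the number of "if"
    # conditions in the rule, answer with the user's last reply.
    last_reply = history[-1][1].strip().lower()
    if len(history) >= rule_text.lower().count(" if ") + 1:
        return "Yes" if last_reply.startswith("yes") else "No"
    return "More"
```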