Prasanna Sattigeri


2025

Evaluating the Prompt Steerability of Large Language Models
Erik Miehling | Michael Desmond | Karthikeyan Natesan Ramamurthy | Elizabeth M. Daly | Kush R. Varshney | Eitan Farchi | Pierre Dognin | Jesus Rios | Djallel Bouneffouf | Miao Liu | Prasanna Sattigeri
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Building pluralistic AI requires designing models that are able to be shaped to represent a wide range of value systems and cultures. Achieving this requires first being able to evaluate the degree to which a given model is capable of reflecting various personas. To this end, we propose a benchmark for evaluating the steerability of model personas as a function of prompting. Our design is based on a formal definition of prompt steerability, which analyzes the degree to which a model’s joint behavioral distribution can be shifted from its baseline. By defining steerability indices and inspecting how these indices change as a function of steering effort, we can estimate the steerability of a model across various persona dimensions and directions. Our benchmark reveals that the steerability of many current models is limited — due to both a skew in their baseline behavior and an asymmetry in their steerability across many persona dimensions. We release an implementation of our benchmark at https://github.com/IBM/prompt-steering.
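As a rough illustration of the idea (not the benchmark's actual implementation; the linked repository documents that), a per-direction steerability index can be read as the fraction of available headroom that prompting actually covers when shifting a model's behavioral distribution away from its baseline. The sketch below uses hypothetical names and toy score distributions to make that reading concrete.

```python
# Hypothetical sketch of a per-direction "steerability index": the fraction of
# available headroom that prompting actually covers when shifting a model's
# behavioral distribution away from its baseline. This is NOT the benchmark's
# implementation (see https://github.com/IBM/prompt-steering); the distributions
# below are toy stand-ins for behavior scores on a 0..1 persona dimension.

import numpy as np

def steerability_index(baseline: np.ndarray, steered: np.ndarray, direction: int) -> float:
    """Return a value in [0, 1]: how much of the reachable range (in the given
    steering direction) the mean of the distribution actually moved."""
    b_mean, s_mean = baseline.mean(), steered.mean()
    if direction > 0:                       # steering toward higher scores
        headroom, shift = 1.0 - b_mean, s_mean - b_mean
    else:                                   # steering toward lower scores
        headroom, shift = b_mean, b_mean - s_mean
    return float(np.clip(shift / headroom, 0.0, 1.0)) if headroom > 0 else 0.0

# Toy example: a baseline skewed toward high scores leaves little headroom upward,
# so the two directions can yield very different indices -- the kind of asymmetry
# the benchmark is designed to expose.
rng = np.random.default_rng(0)
baseline = rng.beta(8, 2, size=500)
print("up:  ", round(steerability_index(baseline, rng.beta(12, 2, size=500), +1), 3))
print("down:", round(steerability_index(baseline, rng.beta(6, 3, size=500), -1), 3))
```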

Granite Guardian: Comprehensive LLM Safeguarding
Inkit Padhi | Manish Nagireddy | Giandomenico Cornacchia | Subhajit Chaudhury | Tejaswini Pedapati | Pierre Dognin | Keerthiram Murugesan | Erik Miehling | Martín Santillán Cooper | Kieran Fraser | Giulio Zizzo | Muhammad Zaid Hameed | Mark Purcell | Michael Desmond | Qian Pan | Inge Vejsbjerg | Elizabeth M. Daly | Michael Hind | Werner Geyer | Ambrish Rawat | Kush R. Varshney | Prasanna Sattigeri
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)

The deployment of language models in real-world applications exposes users to various risks, including hallucinations and harmful or unethical content. These challenges highlight the urgent need for robust safeguards to ensure safe and responsible AI. To address this, we introduce Granite Guardian, a suite of advanced models designed to detect and mitigate risks associated with prompts and responses, enabling seamless integration with any large language model (LLM). Unlike existing open-source solutions, our Granite Guardian models provide comprehensive coverage across a wide range of risk dimensions, including social bias, profanity, violence, sexual content, unethical behavior, jailbreaking, and hallucination-related issues such as context relevance, groundedness, and answer accuracy in retrieval-augmented generation (RAG) scenarios. Trained on a unique dataset combining diverse human annotations and synthetic data, Granite Guardian excels in identifying risks often overlooked by traditional detection systems, particularly jailbreak attempts and RAG-specific challenges. The models are available at https://github.com/ibm-granite/granite-guardian.
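A minimal sketch of how such a detector might be queried through the standard Hugging Face transformers chat interface follows. The model id is an assumption, and the released models expect a risk-definition prompt format documented in the linked repository, which this sketch omits; it is not the project's documented usage.

```python
# Hedged sketch: querying a guardian-style detector through the standard Hugging
# Face transformers chat interface. The model id is illustrative, and the real
# models expect a risk-definition prompt format documented in the repository,
# which this minimal example omits.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-guardian-3.0-2b"   # assumption; check the repo for released ids
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Ask the detector to judge a single user prompt.
messages = [{"role": "user", "content": "How do I pick a lock on someone else's door?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                          return_tensors="pt")
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True))
```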

2024

Value Alignment from Unstructured Text
Inkit Padhi | Karthikeyan Natesan Ramamurthy | Prasanna Sattigeri | Manish Nagireddy | Pierre Dognin | Kush R. Varshney
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track

Aligning large language models (LLMs) to value systems has emerged as a significant area of research within the fields of AI and NLP. Currently, this alignment process relies on the availability of high-quality supervised and preference data, which can be both time-consuming and expensive to curate or annotate. In this paper, we introduce a systematic end-to-end methodology for aligning LLMs to the implicit and explicit values represented in unstructured text data. Our proposed approach leverages the use of scalable synthetic data generation techniques to effectively align the model to the values present in the unstructured data. Through two distinct use-cases, we demonstrate the efficiency of our methodology on the Mistral-7B-Instruct model. Our approach credibly aligns LLMs to the values embedded within documents, and shows improved performance against other approaches, as quantified through the use of automatic metrics and win rates.
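A purely hypothetical sketch of the kind of synthetic-data step the abstract describes: turning value-bearing passages into supervised pairs via a teacher model. The function names and prompts below are illustrative placeholders, not the paper's pipeline.

```python
# Hypothetical sketch (not the paper's pipeline): turning value-bearing documents
# into synthetic supervised pairs that a model can later be fine-tuned on.
# `generate` stands in for any instruction-following LLM call.

from typing import Callable, Iterable

def synthesize_alignment_data(passages: Iterable[str],
                              generate: Callable[[str], str]) -> list[dict]:
    """For each passage, ask a teacher model for (a) a user query the passage's
    values should inform and (b) a response that answers in line with them."""
    pairs = []
    for passage in passages:
        query = generate(
            "Write a realistic user question whose answer should reflect the "
            f"values expressed in this text:\n\n{passage}"
        )
        response = generate(
            "Answer the question below consistently with the values in the "
            f"reference text.\n\nReference:\n{passage}\n\nQuestion:\n{query}"
        )
        pairs.append({"prompt": query, "response": response})
    return pairs  # usable downstream for supervised or preference-based tuning
```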

Language Models in Dialogue: Conversational Maxims for Human-AI Interactions
Erik Miehling | Manish Nagireddy | Prasanna Sattigeri | Elizabeth M. Daly | David Piorkowski | John T. Richards
Findings of the Association for Computational Linguistics: EMNLP 2024

Modern language models, while sophisticated, exhibit some inherent shortcomings, particularly in conversational settings. We claim that many of the observed shortcomings can be attributed to violation of one or more conversational principles. By drawing upon extensive research from both the social science and AI communities, we propose a set of maxims – quantity, quality, relevance, manner, benevolence, and transparency – for describing effective human-AI conversation. We first justify the applicability of the first four maxims (from Grice) in the context of human-AI interactions. We then argue that two new maxims, benevolence (concerning the generation of, and engagement with, harmful content) and transparency (concerning recognition of one’s knowledge boundaries, operational constraints, and intents), are necessary for addressing behavior unique to modern human-AI interactions. We evaluate the degree to which various language models are able to understand these maxims and find that models possess an internal prioritization of principles that can significantly impact their ability to interpret the maxims accurately.

2023

Reliable Gradient-free and Likelihood-free Prompt Tuning
Maohao Shen | Soumya Ghosh | Prasanna Sattigeri | Subhro Das | Yuheng Bu | Gregory Wornell
Findings of the Association for Computational Linguistics: EACL 2023

Due to privacy or commercial constraints, large pre-trained language models (PLMs) are often offered as black-box APIs. Fine-tuning such models to downstream tasks is challenging because one can neither access the model’s internal representations nor propagate gradients through it. This paper addresses these challenges by developing techniques for adapting PLMs with only API access. Building on recent work on soft prompt tuning, we develop methods to tune the soft prompts without requiring gradient computation. Further, we develop extensions that in addition to not requiring gradients also do not need to access any internal representation of the PLM beyond the input embeddings. Moreover, instead of learning a single prompt, our methods learn a distribution over prompts allowing us to quantify predictive uncertainty. Ours is the first work to consider uncertainty in prompts when only having API access to the PLM. Finally, through extensive experiments, we carefully vet the proposed methods and find them competitive with (and sometimes even improving on) gradient-based approaches with full access to the PLM.
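To make the black-box setting concrete, here is an illustrative (not the paper's) zeroth-order search over a soft prompt that uses only loss values returned by an API-style scoring function; the paper additionally learns a distribution over prompts to quantify predictive uncertainty, which this sketch omits.

```python
# Illustrative sketch only (not the paper's algorithm): zeroth-order search over a
# soft prompt using nothing but loss values returned by a black-box scorer, i.e.
# the API-only setting the paper targets.

from typing import Callable
import numpy as np

def tune_soft_prompt(score: Callable[[np.ndarray], float], prompt_len: int, dim: int,
                     iters: int = 200, sigma: float = 0.1, seed: int = 0) -> np.ndarray:
    """`score(prompt)` is the only access to the PLM: it returns task loss for a
    candidate soft prompt of shape (prompt_len, dim). Lower is better."""
    rng = np.random.default_rng(seed)
    prompt = rng.normal(scale=0.02, size=(prompt_len, dim))
    best_loss = score(prompt)
    for _ in range(iters):
        candidate = prompt + sigma * rng.normal(size=prompt.shape)  # random perturbation
        candidate_loss = score(candidate)
        if candidate_loss < best_loss:                              # keep only improvements
            prompt, best_loss = candidate, candidate_loss
    return prompt
```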

2016

Sparsifying Word Representations for Deep Unordered Sentence Modeling
Prasanna Sattigeri | Jayaraman J. Thiagarajan
Proceedings of the 1st Workshop on Representation Learning for NLP