Anubrata Das


2025

On Localizing and Deleting Toxic Memories in Large Language Models
Anubrata Das | Manoj Kumar | Ninareh Mehrabi | Anil Ramakrishna | Anna Rumshisky | Kai-Wei Chang | Aram Galstyan | Morteza Ziyadi | Rahul Gupta
Findings of the Association for Computational Linguistics: NAACL 2025

Warning: This paper contains offensive language. Ensuring that large language models (LLMs) do not generate harmful text is critical for their safe deployment. A common failure mode involves producing toxic responses to otherwise innocuous prompts. While various detoxification methods have been proposed, the underlying mechanisms that drive toxic generation in LLMs are not yet fully understood. Our work aims to provide a mechanistic understanding of toxic generation against innocuous-seeming adversarial prompts through the lens of memory localization. We find evidence that toxic memories are localized in the early multilayer perceptron (MLP) layers of GPT-2-XL. We further investigate the effects of editing and deleting these toxic memories in MLP layers to reduce toxic generation. Editing significantly reduces toxic generation, from 62.86% to 28.61%. However, this reduction comes with a trade-off in generation quality: perplexity on the adversarial prompts rises from 78.18 for the unedited GPT-2-XL to 106.06 after editing. Localization-informed deletion achieves a better toxicity-perplexity trade-off than random early-layer editing, which reduces toxicity but leads to larger perplexity increases.
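As a rough illustration of what deleting an MLP "memory" means mechanically, the sketch below zero-ablates the MLP output projections in a few early GPT-2 blocks and compares perplexity on a prompt before and after. This is a minimal sketch, not the paper's actual localization or editing procedure: the base model (small GPT-2 instead of GPT-2-XL), the layer indices, and the prompt are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' code): "delete" the MLP
# sublayers of a few early transformer blocks and measure the perplexity cost.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name = "gpt2"  # lightweight stand-in for GPT-2-XL
tok = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

prompt = "An innocuous-seeming prompt goes here."  # placeholder text
print("before:", perplexity(prompt))

# Zero-ablate the MLP output projection in early blocks, removing those
# sublayers' contribution to the residual stream. Indices are hypothetical.
with torch.no_grad():
    for layer_idx in [2, 3, 4]:
        mlp = model.transformer.h[layer_idx].mlp
        mlp.c_proj.weight.zero_()
        mlp.c_proj.bias.zero_()

print("after:", perplexity(prompt))
```

Blunt ablation like this removes everything those sublayers compute, which is why, per the abstract, localization-informed deletion of specific memories trades off toxicity against perplexity better than untargeted early-layer edits.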

Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025)
Trista Cao | Anubrata Das | Tharindu Kumarage | Yixin Wan | Satyapriya Krishna | Ninareh Mehrabi | Jwala Dhamala | Anil Ramakrishna | Aram Galstyan | Anoop Kumar | Rahul Gupta | Kai-Wei Chang
Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025)

2022

ProtoTEx: Explaining Model Decisions with Prototype Tensors
Anubrata Das | Chitrank Gupta | Venelin Kovatchev | Matthew Lease | Junyi Jessy Li
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We present ProtoTEx, a novel white-box NLP classification architecture based on prototype networks (Li et al., 2018). ProtoTEx faithfully explains model decisions based on prototype tensors that encode latent clusters of training examples. At inference time, classification decisions are based on the distances between the input text and the prototype tensors, explained via the training examples most similar to the most influential prototypes. We also describe a novel interleaved training algorithm that effectively handles classes characterized by a lack of indicative features. On a propaganda detection task, ProtoTEx accuracy matches BART-large and exceeds BERT-large, with the added benefit of providing faithful explanations. A user study also shows that prototype-based explanations help non-experts better recognize propaganda in online news.
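The distance-based decision rule the abstract describes can be made concrete with a small sketch. This is the generic prototype-network idea under stated assumptions, not the released ProtoTEx implementation: the encoder output, hidden dimension, prototype count, and class count are all illustrative.

```python
# Minimal sketch (assumptions, not ProtoTEx's released code): classify by
# distances between an input encoding and learned prototype tensors.
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    def __init__(self, hidden_dim=768, num_prototypes=16, num_classes=2):
        super().__init__()
        # Prototype tensors: learned points in the encoder's latent space,
        # intended to settle into clusters of training examples.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, hidden_dim))
        # Classification head over prototype evidence.
        self.head = nn.Linear(num_prototypes, num_classes)

    def forward(self, encoding):  # encoding: (batch, hidden_dim)
        # Squared L2 distance from each input to each prototype.
        dists = torch.cdist(encoding, self.prototypes) ** 2  # (batch, P)
        # Closer prototypes should contribute more evidence, so negate.
        logits = self.head(-dists)
        return logits, dists

model = PrototypeClassifier()
enc = torch.randn(4, 768)               # stands in for a text encoder's output
logits, dists = model(enc)
most_influential = dists.argmin(dim=1)  # nearest prototype per example
```

Because the logits are a function of prototype distances alone, the explanation (the training examples nearest to the most influential prototypes) is faithful to the decision rather than a post-hoc rationalization.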

longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks.
Venelin Kovatchev | Trina Chatterjee | Venkata S Govindarajan | Jifan Chen | Eunsol Choi | Gabriella Chronis | Anubrata Das | Katrin Erk | Matthew Lease | Junyi Jessy Li | Yating Wu | Kyle Mahowald
Proceedings of the First Workshop on Dynamic Adversarial Data Collection

Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability. Here, we describe the approach of the team “longhorns” on Task 1 of the First Workshop on Dynamic Adversarial Data Collection (DADC), which asked teams to manually fool a model on an Extractive Question Answering task. Our team finished first (pending validation), with a model error rate of 62%. We advocate for a systematic, linguistically informed approach to formulating adversarial questions, and we describe the results of our pilot experiments, as well as our official submission.
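The task's basic loop, posing a question against a fixed context and counting cases where the model's span misses the gold answer, can be sketched as below. This is a hypothetical probe harness, not the team's tooling: the checkpoint, context, and adversarial rewordings are placeholders chosen for illustration.

```python
# Minimal sketch (assumptions, not the longhorns pipeline): probe an
# extractive QA model with reworded questions and compute an error rate.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = ("The First Workshop on Dynamic Adversarial Data Collection "
           "was held in 2022.")
probes = [  # (question, gold answer); rewordings are illustrative
    ("When was the workshop held?", "2022"),
    ("In which year, if any, did the workshop occur?", "2022"),
    ("Before what year had the workshop not yet been held?", "2022"),
]

errors = 0
for question, gold in probes:
    pred = qa(question=question, context=context)["answer"]
    if gold.lower() not in pred.lower():
        errors += 1
    print(f"{question!r} -> {pred!r}")

print(f"model error rate: {errors / len(probes):.0%}")
```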