2024
Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification
Tao Meng
|
Ninareh Mehrabi
|
Palash Goyal
|
Anil Ramakrishna
|
Aram Galstyan
|
Richard Zemel
|
Kai-Wei Chang
|
Rahul Gupta
|
Charith Peris
Findings of the Association for Computational Linguistics: EMNLP 2024
We propose a constraint learning schema for fine-tuning Large Language Models (LLMs) with attribute control. Given a training corpus and control criteria formulated as a sequence-level constraint on model outputs, our method fine-tunes the LLM on the training corpus while enhancing constraint satisfaction with minimal impact on its utility and generation quality. Specifically, our approach regularizes the LLM training by penalizing the KL divergence between the desired output distribution, which satisfies the constraints, and the LLM's posterior. This regularization term can be approximated by an auxiliary model trained to decompose the sequence-level constraints into token-level guidance, allowing the term to be measured by a closed-form formulation. To further improve efficiency, we design a parallel scheme for concurrently updating both the LLM and the auxiliary model. We evaluate the empirical performance of our approach by controlling the toxicity when training an LLM. We show that our approach leads to an LLM that produces fewer inappropriate responses while achieving competitive performance on benchmarks and a toxicity detection task.
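A minimal sketch of the kind of KL-regularized training step the abstract describes, under stated assumptions: `model` and `aux_model` are Hugging Face-style causal LMs that return `.logits`, and the auxiliary model's token-level guidance is folded into the model's own distribution to form the desired (constraint-satisfying) target. The paper's exact formulation may differ.

```python
# Sketch only: KL-regularized fine-tuning step with token-level guidance.
import torch
import torch.nn.functional as F

def constrained_step(model, aux_model, input_ids, labels, lam=0.1):
    logits = model(input_ids).logits                      # (batch, seq_len, vocab)
    nll = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        labels[:, 1:].reshape(-1),
        ignore_index=-100,
    )

    log_p = F.log_softmax(logits, dim=-1)                 # LLM posterior
    with torch.no_grad():
        # Token-level guidance from the auxiliary model: reweight the model's own
        # distribution toward constraint-satisfying continuations (detached target).
        guidance = F.log_softmax(aux_model(input_ids).logits, dim=-1)
        log_q = F.log_softmax(log_p + guidance, dim=-1)   # desired output distribution

    # Closed-form KL(q || p) over the vocabulary at every position.
    kl = (log_q.exp() * (log_q - log_p)).sum(-1).mean()
    return nll + lam * kl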
Evaluating Differentially Private Synthetic Data Generation in High-Stakes Domains
Krithika Ramesh
|
Nupoor Gandhi
|
Pulkit Madaan
|
Lisa Bauer
|
Charith Peris
|
Anjalie Field
Findings of the Association for Computational Linguistics: EMNLP 2024
The difficulty of anonymizing text data hinders the development and deployment of NLP in high-stakes domains that involve private data, such as healthcare and social services. Poorly anonymized sensitive data cannot be easily shared with annotators or external researchers, nor can it be used to train public models. In this work, we explore the feasibility of using synthetic data generated from differentially private language models in place of real data to facilitate the development of NLP in these domains without compromising privacy. In contrast to prior work, we generate synthetic data for real high-stakes domains, and we propose and conduct use-inspired evaluations to assess data quality. Our results show that prior simplistic evaluations have failed to highlight utility, privacy, and fairness issues in the synthetic data. Overall, our work underscores the need for further improvements to synthetic data generation for it to be a viable way to enable privacy-preserving data sharing.
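One illustrative "use-inspired" check of synthetic-data quality, a hedged sketch rather than the paper's actual evaluation suite: train a downstream classifier on the synthetic text and measure it on real held-out data. The variables `synthetic_texts`, `synthetic_labels`, `real_texts`, and `real_labels` are placeholders.

```python
# Sketch: train-on-synthetic, test-on-real utility probe for DP synthetic text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def synthetic_utility(synthetic_texts, synthetic_labels, real_texts, real_labels):
    vec = TfidfVectorizer(max_features=50_000)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(synthetic_texts), synthetic_labels)   # fit on synthetic only
    preds = clf.predict(vec.transform(real_texts))                  # evaluate on real data
    return f1_score(real_labels, preds, average="macro")
```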
The steerability of large language models toward data-driven personas
Junyi Li
|
Charith Peris
|
Ninareh Mehrabi
|
Palash Goyal
|
Kai-Wei Chang
|
Aram Galstyan
|
Richard Zemel
|
Rahul Gupta
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large language models (LLMs) are known to generate biased responses in which the opinions of certain groups and populations are underrepresented. Here, we present a novel approach to achieve controllable generation of specific viewpoints using LLMs, which can be leveraged to produce multiple perspectives and to reflect diverse opinions. Moving beyond the traditional reliance on demographics such as age, gender, or party affiliation, we introduce a data-driven notion of persona grounded in collaborative filtering, defined as either a single individual or a cohort of individuals manifesting similar views across specific inquiries. Because individuals in the same demographic group may have different personas, our data-driven persona definition allows for a more nuanced understanding of the different (latent) social groups present in the population. In addition, we explore an efficient method to steer LLMs toward the personas that we define. We show that our data-driven personas significantly enhance model steerability, with improvements of 57% to 77% over our best-performing baselines.
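A rough sketch of how a collaborative-filtering persona could be derived, assuming a respondent-by-question opinion matrix; the factorization and clustering choices here (truncated SVD, k-means) are illustrative stand-ins, not the paper's exact pipeline.

```python
# Sketch: data-driven personas as cohorts of respondents with similar latent views.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

def build_personas(responses, n_factors=16, n_personas=8, seed=0):
    """responses: (num_respondents, num_questions) matrix of answer codes."""
    latent = TruncatedSVD(n_components=n_factors, random_state=seed).fit_transform(responses)
    cohorts = KMeans(n_clusters=n_personas, random_state=seed, n_init=10).fit_predict(latent)
    return latent, cohorts   # a persona is a cohort sharing similar latent views

# Toy usage with random survey-style data.
rng = np.random.default_rng(0)
latent, cohorts = build_personas(rng.integers(0, 5, size=(500, 40)).astype(float))
```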
Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs
Elan Markowitz
|
Anil Ramakrishna
|
Jwala Dhamala
|
Ninareh Mehrabi
|
Charith Peris
|
Rahul Gupta
|
Kai-Wei Chang
|
Aram Galstyan
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Knowledge graphs (KGs) complement Large Language Models (LLMs) by providing reliable, structured, domain-specific, and up-to-date external knowledge. However, KGs and LLMs are often developed separately and must be integrated after training. We introduce Tree-of-Traversals, a novel zero-shot reasoning algorithm that enables augmentation of black-box LLMs with one or more KGs. The algorithm equips an LLM with actions for interfacing with a KG and enables the LLM to perform a tree search over possible thoughts and actions to find high-confidence reasoning paths. Tree-of-Traversals significantly improves performance on question answering and KG question answering tasks. Code is available at https://github.com/amazon-science/tree-of-traversals.
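A schematic sketch of a Tree-of-Traversals-style search loop, not the released implementation: `propose`, `execute`, `score`, and `finish` are caller-supplied hooks that prompt the black-box LLM and query the KG, and the best-first expansion below simplifies the actual algorithm.

```python
# Sketch: best-first search over (action, observation) traces grown from KG actions.
import heapq

def tree_of_traversals(question, propose, execute, score, finish,
                       max_expansions=20, branch=3):
    frontier = [(-0.0, 0, [])]    # (negative score, tie-breaker, trace of (action, observation))
    counter = 1
    for _ in range(max_expansions):
        if not frontier:
            break
        _, _, trace = heapq.heappop(frontier)
        answer = finish(question, trace)          # LLM decides whether this path answers the question
        if answer is not None:
            return answer
        for action in propose(question, trace, k=branch):   # candidate thoughts / KG actions
            observation = execute(action)                    # e.g. expand an entity's relations in the KG
            child = trace + [(action, observation)]
            heapq.heappush(frontier, (-score(question, child), counter, child))
            counter += 1
    return None
```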
2023
MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages
Jack FitzGerald
|
Christopher Hench
|
Charith Peris
|
Scott Mackie
|
Kay Rottmann
|
Ana Sanchez
|
Aaron Nash
|
Liam Urbach
|
Vishesh Kakarala
|
Richa Singh
|
Swetha Ranganath
|
Laurie Crist
|
Misha Britan
|
Wouter Leeuwis
|
Gokhan Tur
|
Prem Natarajan
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
We present the MASSIVE dataset: the Multilingual Amazon SLU resource package (SLURP) for Slot-filling, Intent classification, and Virtual assistant Evaluation. MASSIVE contains 1M realistic, parallel, labeled virtual assistant utterances spanning 51 languages, 18 domains, 60 intents, and 55 slots. MASSIVE was created by tasking professional translators to localize the English-only SLURP dataset into 50 typologically diverse languages from 29 genera. We also present modeling results on XLM-R and mT5, including exact match accuracy, intent classification accuracy, and slot-filling F1 score. We have released our dataset, modeling code, and models publicly.
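For concreteness, a minimal sketch of the exact-match metric reported alongside intent accuracy and slot F1 (a generic implementation, not the released evaluation code): an utterance counts only when the predicted intent and all predicted slot labels match the reference.

```python
# Sketch: exact-match accuracy over joint intent + slot predictions.
def exact_match_accuracy(pred_intents, gold_intents, pred_slots, gold_slots):
    assert len(pred_intents) == len(gold_intents) == len(pred_slots) == len(gold_slots)
    hits = sum(
        pi == gi and ps == gs
        for pi, gi, ps, gs in zip(pred_intents, gold_intents, pred_slots, gold_slots)
    )
    return hits / len(gold_intents)

# Toy usage: two utterances, one fully correct.
print(exact_match_accuracy(
    ["alarm_set", "play_music"], ["alarm_set", "play_music"],
    [["O", "B-time"], ["O", "B-artist"]], [["O", "B-time"], ["O", "B-song"]],
))  # -> 0.5
```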
Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning
Mustafa Ozdayi
|
Charith Peris
|
Jack FitzGerald
|
Christophe Dupuy
|
Jimit Majmudar
|
Haidar Khan
|
Rahil Parikh
|
Rahul Gupta
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Large Language Models (LLMs) are known to memorize significant portions of their training data. Parts of this memorized content have been shown to be extractable by simply querying the model, which poses a privacy risk. We present a novel approach which uses prompt-tuning to control the extraction rates of memorized content in LLMs. We present two prompt training strategies to increase and decrease extraction rates, which correspond to an attack and a defense, respectively. We demonstrate the effectiveness of our techniques by using models from the GPT-Neo family on a public benchmark. For the 1.3B parameter GPT-Neo model, our attack yields a 9.3 percentage point increase in extraction rate compared to our baseline. Our defense can be tuned to achieve different privacy-utility trade-offs by a user-specified hyperparameter. We achieve an extraction rate reduction of up to 97.7% relative to our baseline, with a perplexity increase of 16.9%.
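A hedged sketch of the prompt-tuning setup the abstract describes: a small matrix of trainable soft-prompt embeddings is prepended to the frozen LM's input embeddings. The naive sign flip used below for the defense is an illustrative stand-in; the paper's defense balances privacy against utility via a user-specified hyperparameter.

```python
# Sketch: soft-prompt tuning on a frozen Hugging Face-style causal LM.
import torch
import torch.nn.functional as F

class SoftPrompt(torch.nn.Module):
    def __init__(self, model, prompt_len=20):
        super().__init__()
        self.model = model.eval()                 # frozen base LM
        for p in self.model.parameters():
            p.requires_grad_(False)
        emb = self.model.get_input_embeddings()
        # Initialize the trainable prompt from real token embeddings.
        self.prompt = torch.nn.Parameter(emb.weight[:prompt_len].detach().clone())

    def loss(self, input_ids, labels, defend=False):
        tok_emb = self.model.get_input_embeddings()(input_ids)
        inputs = torch.cat(
            [self.prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1), tok_emb], dim=1
        )
        logits = self.model(inputs_embeds=inputs).logits[:, self.prompt.size(0):]
        nll = F.cross_entropy(
            logits[:, :-1].reshape(-1, logits.size(-1)),
            labels[:, 1:].reshape(-1),
            ignore_index=-100,
        )
        # Attack prompt: maximize likelihood of memorized continuations (minimize nll).
        # Defense prompt: naive sign flip as a stand-in for the paper's tunable objective.
        return -nll if defend else nll
```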
Coordinated Replay Sample Selection for Continual Federated Learning
Jack Good
|
Jimit Majmudar
|
Christophe Dupuy
|
Jixuan Wang
|
Charith Peris
|
Clement Chung
|
Richard Zemel
|
Rahul Gupta
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track
Continual Federated Learning (CFL) combines Federated Learning (FL), the decentralized learning of a central model on a number of client devices that may not communicate their data, and Continual Learning (CL), the learning of a model from a continual stream of data without keeping the entire history. In CL, the main challenge is forgetting what was learned from past data. While replay-based algorithms that keep a small pool of past training data are effective at reducing forgetting, only simple replay sample selection strategies have been applied to CFL in prior work, and no previous work has explored coordination among clients for better sample selection. To bridge this gap, we adapt a replay sample selection objective based on loss gradient diversity to CFL and propose a new relaxation-based selection of samples to optimize the objective. Next, we propose a practical algorithm to coordinate gradient-based replay sample selection across clients without communicating private data. We benchmark our coordinated and uncoordinated replay sample selection algorithms against random sampling-based baselines with language models trained on a large-scale, de-identified, real-world text dataset. We show that gradient-based sample selection methods both boost performance and reduce forgetting compared to random sampling methods, with our coordination method showing gains early in the low replay size regime (when the budget for storing past data is small).
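An illustrative sketch of gradient-diversity-based replay selection, assuming per-sample loss gradients are available as vectors; the greedy max-min cosine-distance rule below is a stand-in for the paper's relaxation-based objective and omits the cross-client coordination step.

```python
# Sketch: greedily pick a replay buffer whose per-sample gradients are maximally diverse.
import numpy as np

def select_replay(grads, k):
    """grads: (num_samples, dim) array of per-sample loss gradients."""
    g = grads / (np.linalg.norm(grads, axis=1, keepdims=True) + 1e-12)
    chosen = [int(np.argmax(np.linalg.norm(grads, axis=1)))]   # start from the largest gradient
    while len(chosen) < k:
        sims = g @ g[chosen].T                                  # cosine similarity to selected set
        chosen.append(int(np.argmax(-sims.max(axis=1))))        # farthest-from-selected (max-min) step
    return chosen

rng = np.random.default_rng(0)
print(select_replay(rng.normal(size=(100, 32)), k=5))
```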
2022
Task-driven augmented data evaluation
Olga Golovneva
|
Pan Wei
|
Khadige Abboud
|
Charith Peris
|
Lizhen Tan
|
Haiyang Yu
Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
In the area of data augmentation research, the main focus to date has been on improving generation models, while the examination and improvement of synthetic data evaluation methods remain less explored. In our work, we explore a number of sentence similarity measures in the context of data generation filtering, and evaluate their impact on the performance of the targeted Natural Language Understanding problem, using intent classification and named entity recognition tasks as examples. Our experiments on the ATIS dataset show that the right choice of filtering technique can bring improvements of up to 33% in sentence accuracy for targeted underrepresented intents.
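A minimal sketch of similarity-based filtering of generated data; TF-IDF cosine similarity is used purely as a stand-in for the sentence similarity measures the paper compares, and the thresholds are illustrative.

```python
# Sketch: keep a generated utterance only if its closest real utterance falls in a similarity band.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_augmented(real_utts, generated_utts, low=0.3, high=0.95):
    vec = TfidfVectorizer().fit(real_utts + generated_utts)
    sims = cosine_similarity(vec.transform(generated_utts), vec.transform(real_utts))
    keep = []
    for utt, best_sim in zip(generated_utts, sims.max(axis=1)):
        if low <= best_sim <= high:     # drop off-distribution and near-duplicate generations
            keep.append(utt)
    return keep

print(filter_augmented(["play some jazz", "set an alarm"],
                       ["play some jazz", "play a jazz song", "order a pizza"]))
```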
Knowledge Distillation Transfer Sets and their Impact on Downstream NLU Tasks
Charith Peris
|
Lizhen Tan
|
Thomas Gueudre
|
Turan Gojayev
|
Pan Wei
|
Gokmen Oz
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
Teacher-student knowledge distillation is a popular technique for compressing today’s prevailing large language models into manageable sizes that fit low-latency downstream applications. Both the teacher and the choice of transfer set used for distillation are crucial ingredients in creating a high-quality student. Yet, the generic corpora used to pretrain the teacher and the corpora associated with the downstream target domain are often significantly different, which raises a natural question: should the student be distilled over the generic corpora, so as to learn from high-quality teacher predictions, or over the downstream task corpora to align with finetuning? Our study investigates this trade-off using Domain Classification (DC) and Intent Classification/Named Entity Recognition (ICNER) as downstream tasks. We distill several multilingual students from a larger multilingual LM with varying proportions of generic and task-specific datasets, and report their performance after finetuning on DC and ICNER. We observe significant improvements across tasks and test sets when only task-specific corpora are used. We also report on how the impact of adding task-specific data to the transfer set correlates with the similarity between generic and task-specific data. Our results clearly indicate that, while distillation from a generic LM benefits downstream tasks, students learn better using target domain data even if it comes at the price of noisier teacher predictions. In other words, target domain data still trumps teacher knowledge.
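A hedged sketch of the distillation objective assumed here (standard temperature-scaled soft-target KD, not necessarily the paper's exact recipe), together with a toy way to mix generic and task-specific data in the transfer set; `generic_pool`, `task_pool`, and `task_fraction` are illustrative names.

```python
# Sketch: soft-target knowledge distillation over a mixed transfer set.
import random
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    # KL(teacher || student), scaled by T^2 as is conventional for distillation.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

def sample_transfer_batch(generic_pool, task_pool, task_fraction=0.75):
    """Draw the next distillation batch from the task-specific pool with the given probability."""
    pool = task_pool if random.random() < task_fraction else generic_pool
    return random.choice(pool)
```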
Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)
Jack FitzGerald
|
Kay Rottmann
|
Julia Hirschberg
|
Mohit Bansal
|
Anna Rumshisky
|
Charith Peris
|
Christopher Hench
Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)
Massively Multilingual Natural Language Understanding 2022 (MMNLU-22) Workshop and Competition
Jack FitzGerald
|
Christopher Hench
|
Charith Peris
|
Kay Rottmann
Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)
To be written (workshop summary paper).
2020
Using multiple ASR hypotheses to boost i18n NLU performance
Charith Peris
|
Gokmen Oz
|
Khadige Abboud
|
Venkata sai Varada Varada
|
Prashan Wanigasekara
|
Haidar Khan
Proceedings of the 17th International Conference on Natural Language Processing (ICON)
Current voice assistants typically use the best hypothesis yielded by their Automatic Speech Recognition (ASR) module as input to their Natural Language Understanding (NLU) module, thereby losing helpful information that might be stored in lower-ranked ASR hypotheses. We explore the change in performance of NLU-associated tasks when utilizing the five-best ASR hypotheses compared to the status quo, for two language datasets, German and Portuguese. To harvest information from the ASR five-best, we leverage extractive summarization and joint extractive-abstractive summarization models for Domain Classification (DC) experiments, while using a sequence-to-sequence model with a pointer-generator network for Intent Classification (IC) and Named Entity Recognition (NER) multi-task experiments. For the DC full test set, we observe significant improvements of up to 7.2% and 15.5% in micro-averaged F1 scores for German and Portuguese, respectively. In cases where the best ASR hypothesis was not an exact match to the transcribed utterance (mismatched test set), we see improvements of up to 6.7% and 8.8% in micro-averaged F1 scores for German and Portuguese, respectively. For IC and NER multi-task experiments, when evaluating on the mismatched test set, we see improvements across all domains in German and in 17 out of 19 domains in Portuguese (improvements based on change in SeMER scores). Our results suggest that the use of multiple ASR hypotheses, as opposed to one, can lead to significant performance improvements in the DC task for these non-English datasets. In addition, it could lead to significant improvement in the performance of IC and NER tasks in cases where the ASR model makes mistakes.
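As a toy illustration of the input side only: the simplest way to expose an n-best list to an NLU model is to join the hypotheses with a separator. The paper instead feeds the five-best through extractive/abstractive summarization models (for DC) and a pointer-generator sequence-to-sequence model (for IC/NER), so the concatenation below is an assumption for illustration.

```python
# Sketch: naive serialization of an ASR n-best list into a single NLU input string.
def nbest_to_nlu_input(hypotheses, separator=" [SEP] ", n=5):
    """hypotheses: ASR outputs ordered best-first."""
    return separator.join(hypotheses[:n])

print(nbest_to_nlu_input([
    "spiele musik von queen",    # 1-best
    "spiel musik von queen",
    "spiele musik wohn queen",
]))
```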
Generative Adversarial Networks for Annotated Data Augmentation in Data Sparse NLU
Olga Golovneva
|
Charith Peris
Proceedings of the 17th International Conference on Natural Language Processing (ICON)
Data sparsity is one of the key challenges associated with model development in Natural Language Understanding (NLU) for conversational agents. The challenge is made more complex by the demand for high-quality annotated utterances commonly required for supervised learning, usually resulting in weeks of manual labor and high cost. In this paper, we present our results on boosting NLU model performance through training data augmentation using a sequential generative adversarial network (GAN). We explore data generation in the context of two tasks, the bootstrapping of a new language and the handling of low-resource features. For both tasks we explore three sequential GAN architectures: one with a token-level reward function, another with our own implementation of a token-level Monte Carlo rollout reward, and a third with a sentence-level reward. We evaluate the performance of these feedback models across several sampling methodologies and compare our results to upsampling the original data to the same scale. We further improve GAN model performance through transfer learning of pre-trained embeddings. Our experiments reveal that synthetic data generated using a sequential generative adversarial network provides significant performance boosts across multiple metrics and can be a major benefit to NLU tasks.
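A compact sketch of the sequential-GAN idea (SeqGAN-style policy gradient), an illustrative reconstruction rather than the authors' architecture: `generator.sample` and `discriminator` are assumed helpers returning sampled token sequences with per-token log-probabilities and a realism score, respectively.

```python
# Sketch: one REINFORCE update of the generator against a discriminator reward.
import torch

def generator_pg_step(generator, discriminator, optimizer, batch_size=16, max_len=20):
    # Assumed helper: sample utterances and keep per-token log-probabilities, shape (B, T).
    samples, log_probs = generator.sample(batch_size, max_len)
    with torch.no_grad():
        rewards = discriminator(samples)   # (B,) probability that each sample is real annotated data
    # REINFORCE: increase the likelihood of sequences the discriminator finds realistic.
    # (Token-level or Monte Carlo rollout rewards would replace this sequence-level reward.)
    loss = -(log_probs.sum(dim=1) * rewards).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```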
Using Alternate Representations of Text for Natural Language Understanding
Venkat Varada
|
Charith Peris
|
Yangsook Park
|
Christopher Dipersio
Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI
One of the core components of voice assistants is the Natural Language Understanding (NLU) model. Its ability to accurately classify the user’s request (or “intent”) and recognize named entities in an utterance is pivotal to the success of these assistants. NLU models can be challenged in some languages by code-switching or by morphological and orthographic variations. This work explores the possibility of improving the accuracy of NLU models for Indic languages via the use of alternate representations of the input text, specifically ISO-15919 and IndicSOUNDEX, a custom SOUNDEX designed for Indic languages. We used a deep neural network-based model to incorporate the information from alternate representations into the NLU model. We show that using alternate representations significantly improves the overall performance of NLU models when training data is limited.
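A hedged sketch of one way a neural NLU model could fuse the surface text with an alternate representation (ISO-15919 or IndicSOUNDEX tokens); the paper does not specify this exact fusion, so the concatenation of pooled encodings below is an assumption for illustration.

```python
# Sketch: intent classifier over two parallel token streams (native script + alternate representation).
import torch
import torch.nn as nn

class DualRepresentationIntentModel(nn.Module):
    def __init__(self, vocab_native, vocab_alt, dim=128, n_intents=20):
        super().__init__()
        self.native_emb = nn.EmbeddingBag(vocab_native, dim)   # surface-form tokens
        self.alt_emb = nn.EmbeddingBag(vocab_alt, dim)         # ISO-15919 / SOUNDEX tokens
        self.classifier = nn.Linear(2 * dim, n_intents)

    def forward(self, native_ids, alt_ids):
        # Mean-pool each stream, then concatenate before classification.
        fused = torch.cat([self.native_emb(native_ids), self.alt_emb(alt_ids)], dim=-1)
        return self.classifier(fused)

# Toy usage with random token ids.
model = DualRepresentationIntentModel(vocab_native=5000, vocab_alt=2000)
logits = model(torch.randint(0, 5000, (4, 12)), torch.randint(0, 2000, (4, 12)))
```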