2023
pdf
abs
Adversarial Robustness for Large Language NER models using Disentanglement and Word Attributions
Xiaomeng Jin
|
Bhanukiran Vinzamuri
|
Sriram Venkatapathy
|
Heng Ji
|
Pradeep Natarajan
Findings of the Association for Computational Linguistics: EMNLP 2023
Large language models (LLMs) have been widely used for several applications such as question answering, text classification and clustering. While the preliminary results across the aforementioned tasks look promising, recent work has shown that LLMs perform poorly on complex Named Entity Recognition (NER) tasks in comparison to fine-tuned pre-trained language models (PLMs). To enhance wider adoption of LLMs, our paper investigates the robustness of such LLM NER models and their instruction fine-tuned variants to adversarial attacks. In particular, we propose a novel attack which relies on disentanglement and word attribution techniques, where the former aids in learning an embedding that captures entity and non-entity influences separately, and the latter aids in identifying important words across both components. This is in stark contrast to most techniques, which primarily leverage non-entity words for perturbations, limiting the space explored to synthesize effective adversarial examples. Adversarial training based on our method improves the F1 score over the original LLM NER model by 8% and 18% on the CoNLL-2003 and OntoNotes 5.0 datasets respectively.
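To make the word-attribution step concrete, the following is a minimal sketch (not the paper’s disentanglement-based method): it ranks the tokens seen by an off-the-shelf NER model by the gradient norm of their input embeddings, a common attribution proxy. The dslim/bert-base-NER checkpoint and the scoring objective are assumptions for illustration.

```python
# Hedged sketch: gradient-norm word attribution for an NER model.
# High-scoring tokens (entity and non-entity alike) would be the
# candidates for perturbation when synthesizing adversarial examples.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
model.eval()

def word_attributions(sentence: str):
    enc = tokenizer(sentence, return_tensors="pt")
    embeds = model.get_input_embeddings()(enc["input_ids"])
    embeds.retain_grad()  # keep gradients on this non-leaf tensor
    logits = model(inputs_embeds=embeds,
                   attention_mask=enc["attention_mask"]).logits
    # Back-propagate a proxy objective: the summed winning-class logits.
    logits.max(dim=-1).values.sum().backward()
    scores = embeds.grad.norm(dim=-1).squeeze(0)  # per-token importance
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return sorted(zip(tokens, scores.tolist()), key=lambda p: -p[1])

print(word_attributions("Barack Obama visited Paris last week")[:5])
```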
2022
pdf
abs
FPI: Failure Point Isolation in Large-scale Conversational Assistants
Rinat Khaziev
|
Usman Shahid
|
Tobias Röding
|
Rakesh Chada
|
Emir Kapanci
|
Pradeep Natarajan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track
Large-scale conversational assistants such as Cortana, Alexa, Google Assistant and Siri process requests through a series of modules for wake word detection, speech recognition, language understanding and response generation. An error in one of these modules can cascade through the system. Given the large traffic volumes in these assistants, it is infeasible to manually analyze the data, identify requests with processing errors and isolate the source of error. We present a machine learning system to address this challenge. First, we embed the incoming request and context, such as the system response and subsequent turns, using pre-trained transformer models. Then, we combine these embeddings with encodings of additional metadata features (such as confidence scores from different modules in the online system) using a “mixing-encoder” to output the failure point predictions. Our system achieves 92.2% of human performance on this task while scaling to analyze the entire traffic of a large-scale conversational assistant across 8 languages. We present detailed ablation studies analyzing the impact of different modeling choices.
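As a rough illustration of the “mixing-encoder”, the sketch below fuses a transformer embedding of the request and context with dense metadata features (e.g., per-module confidence scores) to predict a failure point. All dimensions and the failure-point label set are assumptions, not the paper’s architecture.

```python
# Hedged sketch of a mixing-encoder: concatenate the text embedding with
# metadata features and classify the failure point. Sizes are illustrative.
import torch
import torch.nn as nn

class MixingEncoder(nn.Module):
    def __init__(self, text_dim=768, meta_dim=8, hidden=256, n_points=5):
        super().__init__()
        self.mixer = nn.Sequential(
            nn.Linear(text_dim + meta_dim, hidden),
            nn.ReLU(),
            # e.g. {wake word, ASR, NLU, response generation, no failure}
            nn.Linear(hidden, n_points),
        )

    def forward(self, text_emb, meta_feats):
        return self.mixer(torch.cat([text_emb, meta_feats], dim=-1))

logits = MixingEncoder()(torch.randn(4, 768), torch.randn(4, 8))
print(logits.shape)  # torch.Size([4, 5])
```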
pdf
abs
Improving Large-Scale Conversational Assistants using Model Interpretation based Training Sample Selection
Stefan Schroedl
|
Manoj Kumar
|
Kiana Hajebi
|
Morteza Ziyadi
|
Sriram Venkatapathy
|
Anil Ramakrishna
|
Rahul Gupta
|
Pradeep Natarajan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
This paper presents an approach to identify samples from live traffic where the customer implicitly communicated satisfaction with Alexa’s responses, by leveraging interpretations of model behavior. Such customer signals are noisy, and adding a large number of samples from live traffic to the training set makes re-training infeasible. Our work addresses these challenges by identifying a small number of samples that grow the training set by ~0.05% while producing statistically significant improvements in both offline and online tests.
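A hypothetical reading of the selection step, as a sketch: score each live-traffic candidate with some interpretation-derived function and keep only the highest-scoring few, capping growth of the training set at ~0.05%. The scoring function, threshold and cap semantics here are stand-ins, not the paper’s method.

```python
# Hedged sketch: budget-capped sample selection from live traffic.
def select_samples(candidates, score_fn, train_size,
                   growth=0.0005, threshold=0.9):
    budget = int(train_size * growth)  # grow training set by ~0.05%
    keep = [c for c in candidates if score_fn(c) >= threshold]
    keep.sort(key=score_fn, reverse=True)
    return keep[:budget]
```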
pdf
abs
CGF: Constrained Generation Framework for Query Rewriting in Conversational AI
Jie Hao
|
Yang Liu
|
Xing Fan
|
Saurabh Gupta
|
Saleh Soltan
|
Rakesh Chada
|
Pradeep Natarajan
|
Chenlei Guo
|
Gokhan Tur
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
In conversational AI agents, Query Rewriting (QR) plays a crucial role in reducing user friction and satisfying users’ daily demands. User friction is caused by various factors, such as errors in the conversational AI system, users’ accents or their abridged language. In this work, we present a novel Constrained Generation Framework (CGF) for query rewriting at both global and personalized levels. It is based on the encoder-decoder framework, where the encoder takes the query and its previous dialogue turns as input to form a context-enhanced representation, and the decoder uses constrained decoding to generate rewrites within a pre-defined global or personalized constrained decoding space. Extensive offline and online A/B experiments show that the proposed CGF significantly boosts query rewriting performance.
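One standard way to realize constrained decoding over a pre-defined rewrite space is a trie of allowed rewrites consulted at every decoding step. The sketch below uses HuggingFace’s prefix_allowed_tokens_fn hook with a BART checkpoint; the checkpoint, the toy rewrite list, and the simplified trie walk are assumptions for illustration, not CGF’s implementation.

```python
# Hedged sketch: trie-constrained generation over an allowed rewrite space.
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Build a trie over the tokenized allowed rewrites (global or personalized).
allowed = ["play taylor swift", "play the rolling stones"]
trie = {}
for ids in tokenizer(allowed).input_ids:
    node = trie
    for tok in ids:
        node = node.setdefault(tok, {})

def prefix_allowed_tokens_fn(batch_id, input_ids):
    # Simplified walk: follow matching tokens and skip special tokens
    # (e.g. the decoder start token) that are not in the trie.
    node = trie
    for tok in input_ids.tolist():
        if tok in node:
            node = node[tok]
    return list(node.keys()) or [tokenizer.eos_token_id]

inputs = tokenizer("play tailor swift", return_tensors="pt")
out = model.generate(**inputs, num_beams=4,
                     prefix_allowed_tokens_fn=prefix_allowed_tokens_fn)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```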
2021
pdf
Error Detection in Large-Scale Natural Language Understanding Systems Using Transformer Models
Rakesh Chada
|
Pradeep Natarajan
|
Darshan Fofadiya
|
Prathap Ramachandra
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
pdf
abs
FewshotQA: A simple framework for few-shot learning of question answering tasks using pre-trained text-to-text models
Rakesh Chada
|
Pradeep Natarajan
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
The task of learning from only a few examples (called a few-shot setting) is of key importance and relevance to a real-world setting. For question answering (QA), the current state-of-the-art pre-trained models typically need fine-tuning on tens of thousands of examples to obtain good results. Their performance degrades significantly in a few-shot setting (< 100 examples). To address this, we propose a simple fine-tuning framework that leverages pre-trained text-to-text models and is directly aligned with their pre-training framework. Specifically, we construct the input as a concatenation of the question, a mask token representing the answer span, and a context. Given this input, the model is fine-tuned using the same objective as its pre-training objective. Through experimental studies on various few-shot configurations, we show that this formulation leads to significant gains on multiple QA benchmarks (an absolute gain of 34.2 F1 points on average when there are only 16 training examples). The gains extend further when used with larger models (e.g., 72.3 F1 on SQuAD using BART-large with only 32 examples) and translate well to a multilingual setting. On the multilingual TyDiQA benchmark, our model outperforms XLM-RoBERTa-large by an absolute margin of up to 40 F1 points and an average of 33 F1 points in a few-shot setting (<= 64 training examples). We conduct detailed ablation studies to analyze the factors contributing to these gains.
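A minimal sketch of the input construction described above, assuming a BART-style text-to-text model: concatenate the question, the tokenizer’s mask token in place of the answer span, and the context, then fine-tune with the usual sequence-to-sequence loss. The checkpoint and the answer-only target format are assumptions about details the abstract leaves open.

```python
# Hedged sketch: FewshotQA-style "question <mask> context" formulation.
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare around 1600."
answer = "William Shakespeare"

# Input: question + mask token (standing in for the answer span) + context.
source = f"{question} {tokenizer.mask_token} {context}"
batch = tokenizer(source, return_tensors="pt")
labels = tokenizer(answer, return_tensors="pt").input_ids

loss = model(**batch, labels=labels).loss  # one fine-tuning step's loss
print(float(loss))
```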