Jack Lanchantin
2023
Robustness of Named-Entity Replacements for In-Context Learning
Saeed Goodarzi | Nikhil Kagita | Dennis Minn | Shufan Wang | Roberto Dessi | Shubham Toshniwal | Adina Williams | Jack Lanchantin | Koustuv Sinha
Findings of the Association for Computational Linguistics: EMNLP 2023
A key feature of modern large language models (LLMs) is their ability to perform in-context learning, a prompting technique where query-answer demonstrations are shown before the final query. This allows for generalization to novel distributions at inference time, where the LLM can learn new rules without parameter updates. However, the choice of demonstrations and their relationship to a particular query can have a profound impact on model accuracy, raising concerns about the true in-context generalization capabilities (Zhao et al., 2021). In this work, we explore the robustness of the in-context learning paradigm by focusing on entities. In particular, we seek to understand the robustness of LLM in-context learning with respect to named entity replacements. We discover a significant variance in downstream performance based on the choice of named entities, across three popular reasoning tasks and two popular LLMs. Specifically, model accuracy on the test sets can fluctuate between -2.7 and +8.0 points depending on the choice of named entity replacements. Our analysis exposes the sensitivity of LLM in-context learning with respect to named entities, and offers a simple recipe to improve test performance by hyper-parameter tuning the named entities for a given dataset. Code and datasets for reproducing the results are publicly available.
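The core idea of the recipe can be illustrated with a minimal sketch (not the paper's released code): build the same in-context demonstrations with different candidate entities, score each variant on a held-out validation split, and keep the best one. The `score_entity` function below is a placeholder for actually querying an LLM and computing accuracy, and the demonstrations and candidate names are made up for illustration.

```python
import random

# Sketch: treat the named entity in the demonstrations as a hyperparameter.
DEMONSTRATIONS = [
    ("If {name} has 3 apples and buys 2 more, how many apples does {name} have?", "5"),
    ("{name} reads 4 pages a day. How many pages does {name} read in 7 days?", "28"),
]
CANDIDATE_ENTITIES = ["Alice", "Ravi", "Mateo", "Yuki"]


def build_prompt(entity: str, query: str) -> str:
    """Fill every demonstration with the chosen entity and append the query."""
    demos = "\n\n".join(f"Q: {q.format(name=entity)}\nA: {a}" for q, a in DEMONSTRATIONS)
    return f"{demos}\n\nQ: {query}\nA:"


def score_entity(entity: str, validation_queries) -> float:
    """Placeholder scorer: stands in for running the LLM on each validation
    prompt and measuring accuracy against gold answers."""
    random.seed(entity)  # deterministic fake score, just to make the sketch run
    return random.random()


def tune_entity(validation_queries) -> str:
    """Pick the entity whose demonstrations score highest on validation."""
    return max(CANDIDATE_ENTITIES, key=lambda e: score_entity(e, validation_queries))


if __name__ == "__main__":
    best = tune_entity(["If Sam runs 2 km a day, how far does Sam run in 5 days?"])
    print("Best demonstration entity:", best)
    print(build_prompt(best, "If Sam runs 2 km a day, how far does Sam run in 5 days?"))
```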
2020
Reevaluating Adversarial Examples in Natural Language
John Morris | Eli Lifland | Jack Lanchantin | Yangfeng Ji | Yanjun Qi
Findings of the Association for Computational Linguistics: EMNLP 2020
State-of-the-art attacks on NLP models lack a shared definition of what constitutes a successful attack. We distill ideas from past work into a unified framework: a successful natural language adversarial example is a perturbation that fools the model and follows some linguistic constraints. We then analyze the outputs of two state-of-the-art synonym substitution attacks. We find that their perturbations often do not preserve semantics, and 38% introduce grammatical errors. Human surveys reveal that to successfully preserve semantics, we need to significantly increase the minimum cosine similarities between the embeddings of swapped words and between the sentence encodings of original and perturbed sentences. With constraints adjusted to better preserve semantics and grammaticality, the attack success rate drops by over 70 percentage points.
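The constraint described above can be sketched as a simple filter (this is an illustration, not the attack implementations studied in the paper): a candidate synonym swap is accepted only if the swapped word embeddings and the original vs. perturbed sentence encodings both stay above minimum cosine-similarity thresholds. The embedding vectors and the threshold values below are hypothetical; a real attack would use, e.g., counter-fitted word vectors and a sentence encoder.

```python
import numpy as np


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))


def passes_constraints(
    orig_word_vec: np.ndarray,
    swap_word_vec: np.ndarray,
    orig_sent_vec: np.ndarray,
    pert_sent_vec: np.ndarray,
    min_word_sim: float = 0.9,   # illustrative thresholds, not the paper's values
    min_sent_sim: float = 0.98,
) -> bool:
    """Accept a synonym swap only if both similarity constraints hold."""
    return (
        cosine(orig_word_vec, swap_word_vec) >= min_word_sim
        and cosine(orig_sent_vec, pert_sent_vec) >= min_sent_sim
    )


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_orig, w_swap = rng.normal(size=300), rng.normal(size=300)
    s_orig, s_pert = rng.normal(size=512), rng.normal(size=512)
    print(passes_constraints(w_orig, w_swap, s_orig, s_pert))
```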