@inproceedings{yang-etal-2023-many,
    title = "How Many and Which Training Points Would Need to be Removed to Flip this Prediction?",
    author = "Yang, Jinghan  and
      Jain, Sarthak  and
      Wallace, Byron C.",
    editor = "Vlachos, Andreas  and
      Augenstein, Isabelle",
    booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
    month = may,
    year = "2023",
    address = "Dubrovnik, Croatia",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.eacl-main.188/",
    doi = "10.18653/v1/2023.eacl-main.188",
    pages = "2571--2584",
    abstract = "We consider the problem of identifying a \textit{minimal subset} of training data $\mathcal{S}_t$ such that if the instances comprising $\mathcal{S}_t$ had been removed prior to training, the categorization of a given test point $x_t$ would have been different. Identifying such a set may be of interest for a few reasons. First, the cardinality of $\mathcal{S}_t$ provides a measure of robustness (if $|\mathcal{S}_t|$ is small for $x_t$, we might be less confident in the corresponding prediction), which we show is correlated with but complementary to predicted probabilities. Second, interrogation of $\mathcal{S}_t$ may provide a novel mechanism for \textit{contesting} a particular model prediction: If one can make the case that the points in $\mathcal{S}_t$ are wrongly labeled or irrelevant, this may argue for overturning the associated prediction. Identifying $\mathcal{S}_t$ via brute-force is intractable. We propose comparatively fast approximation methods to find $\mathcal{S}_t$ based on \textit{influence functions}, and find that{---}for simple convex text classification models{---}these approaches can often successfully identify relatively small sets of training examples which, if removed, would flip the prediction."
}