Jinghan Yang


2023

How Many and Which Training Points Would Need to be Removed to Flip this Prediction?
Jinghan Yang | Sarthak Jain | Byron Wallace
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

We consider the problem of identifying a \emph{minimal subset} of training data $\mathcal{S}_t$ such that if the instances comprising $\mathcal{S}_t$ had been removed prior to training, the categorization of a given test point $x_t$ would have been different. Identifying such a set may be of interest for a few reasons. First, the cardinality of $\mathcal{S}_t$ provides a measure of robustness (if $|\mathcal{S}_t|$ is small for $x_t$, we might be less confident in the corresponding prediction), which we show is correlated with but complementary to predicted probabilities. Second, interrogation of $\mathcal{S}_t$ may provide a novel mechanism for \emph{contesting} a particular model prediction: if one can make the case that the points in $\mathcal{S}_t$ are wrongly labeled or irrelevant, this may argue for overturning the associated prediction. Identifying $\mathcal{S}_t$ via brute force is intractable. We propose comparatively fast approximation methods to find $\mathcal{S}_t$ based on \emph{influence functions}, and find that, for simple convex text classification models, these approaches can often successfully identify relatively small sets of training examples which, if removed, would flip the prediction.
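
Below is a minimal, illustrative sketch of the general idea, not the paper's exact procedure. For an L2-regularized logistic regression, the classic influence-function approximation $\Delta m_i \approx \frac{1}{n} x_t^\top H^{-1} \nabla_\theta \ell(z_i; \theta)$ estimates how the test margin $x_t^\top \theta$ would shift if training point $z_i$ were removed; points are then removed greedily, retraining after each removal, until the prediction flips. All function names, hyperparameters, the toy data, and the greedy loop are assumptions made for illustration.

```python
# Minimal sketch: influence-function ranking + greedy removal for an
# L2-regularized logistic regression. Names, hyperparameters, and the
# greedy retraining loop are illustrative assumptions, not the paper's
# exact algorithm.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lam=1e-2, iters=2000, lr=0.5):
    """Fit L2-regularized logistic regression by gradient descent; y in {0,1}."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (p - y) / len(y) + lam * w)
    return w

def removal_effect_on_margin(X, y, w, x_t, lam=1e-2):
    """Influence-function estimate of the change in the test margin x_t @ w
    if each training point were removed:
        delta_i ~= (1/n) * x_t^T H^{-1} grad_i,  with grad_i = (p_i - y_i) x_i."""
    n, d = X.shape
    p = sigmoid(X @ w)
    grads = X.T * (p - y)                                 # (d, n) per-example gradients
    H = (X.T * (p * (1 - p))) @ X / n + lam * np.eye(d)   # Hessian of the objective
    return (x_t @ np.linalg.solve(H, grads)) / n          # (n,) estimated margin shifts

# Toy data: the label depends (noisily) on the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(float)
x_t = rng.normal(size=5)
x_t[0] = 0.05                       # near the boundary, so a small set may flip it

w = fit_logreg(X, y)
pred = sigmoid(x_t @ w) > 0.5
delta = removal_effect_on_margin(X, y, w, x_t)
# Remove points whose deletion moves the margin toward the boundary first.
order = np.argsort(delta) if pred else np.argsort(-delta)

keep = np.ones(len(y), dtype=bool)
for i in order:
    keep[i] = False                 # tentatively drop the next most influential point
    w2 = fit_logreg(X[keep], y[keep])
    if (sigmoid(x_t @ w2) > 0.5) != pred:
        print(f"Prediction flipped after removing {int((~keep).sum())} points")
        break
else:
    print("No flip found by this heuristic")
```

Retraining after every removal keeps the sketch honest about verification, at the cost of many refits; the appeal of the influence-function ranking is precisely that it avoids scoring candidate subsets by brute-force retraining.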