Abstract
Many standard tasks in NLP (e.g., Named Entity Recognition, Part-of-Speech tagging, and Semantic Role Labeling) are naturally framed as sequence tagging problems. However, there has been comparatively little work on interpretability methods for sequence tagging models. In this paper, we extend influence functions — which aim to trace predictions back to the training points that informed them — to sequence tagging tasks. We define the influence of a training instance segment as the effect that perturbing the labels within this segment has on a segment-level prediction for a test instance. We provide an efficient approximation to compute this, and show that it tracks with the “true” segment influence (measured empirically). We show the practical utility of segment influence by using the method to identify noisy annotations in NER corpora.

- Anthology ID: 2022.findings-emnlp.58
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates
- Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 824–839
- URL: https://aclanthology.org/2022.findings-emnlp.58
- DOI: 10.18653/v1/2022.findings-emnlp.58
- Cite (ACL): Sarthak Jain, Varun Manjunatha, Byron Wallace, and Ani Nenkova. 2022. Influence Functions for Sequence Tagging Models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 824–839, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal): Influence Functions for Sequence Tagging Models (Jain et al., Findings 2022)
- PDF: https://preview.aclanthology.org/ml4al-ingestion/2022.findings-emnlp.58.pdf
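To make the abstract's notion of segment influence concrete, here is a minimal, hypothetical sketch for a toy linear tagger. It scores a training segment's influence on a test segment as the dot product of the two segments' loss gradients, i.e., it approximates the inverse Hessian in the classical influence-function formula by the identity. This is a common first-order simplification, not the paper's actual approximation; all function and variable names are illustrative.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def segment_loss_grad(W, X, y, segment):
    """Gradient w.r.t. W of the summed per-token cross-entropy loss over the
    token indices in `segment`, for a linear tagger with weights W (features x tags)."""
    grad = np.zeros_like(W)
    for t in segment:
        p = softmax(X[t] @ W)   # predicted tag distribution for token t
        p[y[t]] -= 1.0          # d(cross-entropy)/d(logits)
        grad += np.outer(X[t], p)
    return grad

def segment_influence(W, X_train, y_train, train_seg, X_test, y_test, test_seg):
    """First-order influence of a training segment on a test segment's loss,
    with the inverse Hessian approximated by the identity (a simplification)."""
    g_train = segment_loss_grad(W, X_train, y_train, train_seg)
    g_test = segment_loss_grad(W, X_test, y_test, test_seg)
    return float((g_train * g_test).sum())
```

In this simplified setting, ranking training segments by the magnitude of their influence on mispredicted test segments is the kind of procedure the abstract describes for surfacing likely annotation errors; a segment's influence on itself is a squared gradient norm and hence non-negative.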