Discover, Explain, Improve: An Automatic Slice Detection Benchmark for Natural Language Processing

Wenyue Hua, Lifeng Jin, Linfeng Song, Haitao Mi, Yongfeng Zhang, Dong Yu


Abstract
Pretrained natural language processing (NLP) models have achieved high overall performance, but they still make systematic errors. Instead of manual error analysis, research on slice detection models (SDMs), which automatically identify underperforming groups of datapoints, has attracted increasing attention in computer vision, both for understanding model behaviors and for providing insights for future model training and design. However, little research on SDMs or quantitative evaluation of their effectiveness has been conducted for NLP tasks. Our paper fills the gap by proposing a benchmark named “Discover, Explain, Improve (DEIm)” for classification NLP tasks along with a new SDM, Edisa. Edisa discovers coherent and underperforming groups of datapoints; DEIm then unites them under human-understandable concepts and provides comprehensive evaluation tasks and corresponding quantitative metrics. The evaluation in DEIm shows that Edisa can accurately select error-prone datapoints with informative semantic features that summarize error patterns. Detecting difficult datapoints directly boosts model performance without tuning any original model parameters, showing that discovered slices are actionable for users.
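To make the notion of "slice detection" concrete, the sketch below illustrates the general idea described in the abstract: group evaluation datapoints into candidate slices and flag the groups whose accuracy falls well below the overall accuracy. This is a minimal, generic sketch and not the Edisa algorithm from the paper; the clustering method, cluster count, and accuracy margin are illustrative assumptions.

```python
# Generic slice detection sketch: cluster example embeddings, then flag
# clusters whose accuracy is well below overall accuracy. Not the paper's
# Edisa method; n_slices and margin are illustrative choices.
import numpy as np
from sklearn.cluster import KMeans

def detect_slices(embeddings, correct, n_slices=20, margin=0.10, seed=0):
    """embeddings: (N, d) array; correct: (N,) boolean per-example correctness."""
    labels = KMeans(n_clusters=n_slices, random_state=seed, n_init=10).fit_predict(embeddings)
    overall_acc = correct.mean()
    flagged = []
    for s in range(n_slices):
        mask = labels == s
        slice_acc = correct[mask].mean()
        if slice_acc < overall_acc - margin:  # underperforming slice
            flagged.append({"slice": s, "size": int(mask.sum()), "accuracy": float(slice_acc)})
    return sorted(flagged, key=lambda x: x["accuracy"])

# Example usage with synthetic data:
# rng = np.random.default_rng(0)
# slices = detect_slices(rng.normal(size=(1000, 64)), rng.random(1000) > 0.2)
```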
Anthology ID:
2023.tacl-1.87
Volume:
Transactions of the Association for Computational Linguistics, Volume 11
Year:
2023
Address:
Cambridge, MA
Venue:
TACL
Publisher:
MIT Press
Pages:
1537–1552
URL:
https://aclanthology.org/2023.tacl-1.87
DOI:
10.1162/tacl_a_00617
Cite (ACL):
Wenyue Hua, Lifeng Jin, Linfeng Song, Haitao Mi, Yongfeng Zhang, and Dong Yu. 2023. Discover, Explain, Improve: An Automatic Slice Detection Benchmark for Natural Language Processing. Transactions of the Association for Computational Linguistics, 11:1537–1552.
Cite (Informal):
Discover, Explain, Improve: An Automatic Slice Detection Benchmark for Natural Language Processing (Hua et al., TACL 2023)
PDF:
https://preview.aclanthology.org/jeptaln-2024-ingestion/2023.tacl-1.87.pdf