Abstract
Understanding the internal reasoning behind the predictions of machine learning systems is increasingly vital, given their rising adoption and acceptance. While previous approaches, such as LIME, generate algorithmic explanations by attributing importance to input features for individual examples, recent research indicates that practitioners prefer examining language explanations that describe sub-groups of examples (Lakkaraju et al., 2022). In this paper, we introduce MaNtLE, a model-agnostic natural language explainer that analyzes a set of classifier predictions and generates faithful natural language explanations of classifier rationale for structured classification tasks. MaNtLE uses multi-task training on thousands of synthetic classification tasks to generate faithful explanations. Our experiments indicate that, on average, MaNtLE-generated explanations are at least 11% more faithful than LIME and Anchors explanations across three tasks. Human evaluations demonstrate that users can better predict model behavior using explanations from MaNtLE compared to other techniques.
- Anthology ID:
- 2023.emnlp-main.832
- Volume:
- Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
- Month:
- December
- Year:
- 2023
- Address:
- Singapore
- Editors:
- Houda Bouamor, Juan Pino, Kalika Bali
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 13493–13511
- URL:
- https://aclanthology.org/2023.emnlp-main.832
- DOI:
- 10.18653/v1/2023.emnlp-main.832
- Cite (ACL):
- Rakesh Menon, Kerem Zaman, and Shashank Srivastava. 2023. MaNtLE: Model-agnostic Natural Language Explainer. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13493–13511, Singapore. Association for Computational Linguistics.
- Cite (Informal):
- MaNtLE: Model-agnostic Natural Language Explainer (Menon et al., EMNLP 2023)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-1/2023.emnlp-main.832.pdf