Abstract
We propose a new self-explainable model for Natural Language Processing (NLP) text classification tasks. Our approach constructs explanations concurrently with the formulation of classification predictions. To do so, we extract a rationale from the text, then use it to predict a concept of interest as the final prediction. We provide three types of explanations: 1) rationale extraction, 2) a measure of feature importance, and 3) clustering of concepts. In addition, we show how our model can be compressed without applying complicated compression techniques. We experimentally demonstrate our explainability approach on a number of well-known text classification datasets.
- Anthology ID: 2020.coling-main.286
- Volume: Proceedings of the 28th International Conference on Computational Linguistics
- Month: December
- Year: 2020
- Address: Barcelona, Spain (Online)
- Editors: Donia Scott, Nuria Bel, Chengqing Zong
- Venue: COLING
- Publisher: International Committee on Computational Linguistics
- Pages: 3214–3224
- URL: https://aclanthology.org/2020.coling-main.286
- DOI: 10.18653/v1/2020.coling-main.286
- Cite (ACL): Housam Khalifa Bashier, Mi-Young Kim, and Randy Goebel. 2020. RANCC: Rationalizing Neural Networks via Concept Clustering. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3214–3224, Barcelona, Spain (Online). International Committee on Computational Linguistics.
- Cite (Informal): RANCC: Rationalizing Neural Networks via Concept Clustering (Bashier et al., COLING 2020)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2020.coling-main.286.pdf
- Code: housamkhalifa/rancc
- Data: AG News, IMDb Movie Reviews
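To make the select-then-predict idea from the abstract concrete, here is a minimal, hypothetical sketch rather than the authors' RANCC implementation (their code is in the housamkhalifa/rancc repository): per-token scores pick a hard top-k rationale, and the classifier sees only the rationale, so the selected tokens double as an explanation. All module names, sizes, and the top-k selection rule are assumptions made for illustration.

```python
# Illustrative sketch of a select-then-predict rationale classifier.
# NOT the RANCC model: layer choices, sizes, and hard top-k selection
# are assumptions; RANCC additionally clusters concepts and measures
# feature importance, which this sketch omits.
import torch
import torch.nn as nn

class RationaleClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, num_classes=2, k=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.scorer = nn.Linear(embed_dim, 1)        # per-token importance score
        self.classifier = nn.Linear(embed_dim, num_classes)
        self.k = k

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer ids
        emb = self.embed(token_ids)                  # (batch, seq_len, embed_dim)
        scores = self.scorer(emb).squeeze(-1)        # (batch, seq_len)
        # Hard top-k selection as the rationale; a relaxed/soft mask would be
        # needed to train the selector end to end with gradient descent.
        _, top_idx = scores.topk(self.k, dim=1)
        mask = torch.zeros_like(scores).scatter_(1, top_idx, 1.0)
        rationale = emb * mask.unsqueeze(-1)         # zero out non-rationale tokens
        pooled = rationale.sum(dim=1) / mask.sum(dim=1, keepdim=True)
        logits = self.classifier(pooled)             # predict from the rationale only
        return logits, mask                          # mask marks the extracted rationale

# Toy usage: a batch of 2 sequences of length 20.
model = RationaleClassifier()
x = torch.randint(1, 10000, (2, 20))
logits, rationale_mask = model(x)
print(logits.shape, rationale_mask.sum(dim=1))       # torch.Size([2, 2]) tensor([10., 10.])
```

Because the prediction is computed only from the masked tokens, the mask is a faithful rationale by construction; the paper's contribution beyond this basic scheme is the concept clustering and feature-importance machinery described in the abstract.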