Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification

George Chrysostomou, Nikolaos Aletras


Abstract
Neural network architectures in natural language processing often use attention mechanisms to produce probability distributions over input token representations. Attention has empirically been demonstrated to improve performance in various tasks, while its weights have been extensively used as explanations for model predictions. Recent studies (Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019) have shown that attention cannot generally be considered a faithful explanation (Jacovi and Goldberg, 2020) across encoders and tasks. In this paper, we seek to improve the faithfulness of attention-based explanations for text classification. We achieve this by proposing a new family of Task-Scaling (TaSc) mechanisms that learn task-specific non-contextualised information to scale the original attention weights. Evaluation tests for explanation faithfulness show that the three proposed variants of TaSc improve attention-based explanations across two attention mechanisms, five encoders and five text classification datasets without sacrificing predictive performance. Finally, we demonstrate that TaSc consistently provides more faithful attention-based explanations compared to three widely-used interpretability techniques.
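The abstract describes the mechanism only at a high level: a learned, task-specific, non-contextualised score per token scales the original attention weights. Below is a minimal PyTorch sketch of one plausible reading of the simplest (linear) variant, in which a learned task-specific vector scores each token's non-contextualised embedding and the resulting scalar rescales that token's attention weight. All class and variable names (LinearTaScAttention, tasc_vec, scaled_attn) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class LinearTaScAttention(nn.Module):
    """Additive attention whose weights are rescaled by a learned,
    task-specific score computed from non-contextualised embeddings."""

    def __init__(self, embed_dim: int, hidden_dim: int):
        super().__init__()
        # Task-specific vector u: scores each token from its
        # non-contextualised embedding alone (no context mixing).
        self.tasc_vec = nn.Parameter(torch.randn(embed_dim) * 0.02)
        # Standard additive attention scorer over encoder states.
        self.attn_score = nn.Linear(hidden_dim, 1)

    def forward(self, embeddings, hidden_states, mask):
        # embeddings:    (batch, seq, embed_dim)  non-contextualised
        # hidden_states: (batch, seq, hidden_dim) encoder outputs
        # mask:          (batch, seq) 1 for real tokens, 0 for padding
        attn_logits = self.attn_score(hidden_states).squeeze(-1)
        attn_logits = attn_logits.masked_fill(mask == 0, float("-inf"))
        attn = torch.softmax(attn_logits, dim=-1)  # original attention weights

        # Non-contextualised task score s_i = e_i . u, one scalar per token.
        s = embeddings @ self.tasc_vec  # (batch, seq)

        # Scale the attention weights by the task scores; the scaled
        # weights pool the encoder states into a document representation.
        scaled_attn = attn * s
        context = torch.bmm(scaled_attn.unsqueeze(1), hidden_states).squeeze(1)
        return context, scaled_attn

Under this reading, the scaled weights (scaled_attn) would serve as the token importance scores evaluated in the paper's faithfulness tests, in place of the raw attention weights.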
Anthology ID:
2021.acl-long.40
Volume:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month:
August
Year:
2021
Address:
Online
Editors:
Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Venues:
ACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
477–488
URL:
https://aclanthology.org/2021.acl-long.40
DOI:
10.18653/v1/2021.acl-long.40
Cite (ACL):
George Chrysostomou and Nikolaos Aletras. 2021. Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 477–488, Online. Association for Computational Linguistics.
Cite (Informal):
Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification (Chrysostomou & Aletras, ACL-IJCNLP 2021)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2021.acl-long.40.pdf
Optional supplementary material:
2021.acl-long.40.OptionalSupplementaryMaterial.pdf
Video:
https://preview.aclanthology.org/emnlp-22-attachments/2021.acl-long.40.mp4
Data
IMDb Movie Reviews, SST