Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference

Reza Ghaeini, Xiaoli Fern, Prasad Tadepalli


Abstract
Deep learning models have achieved remarkable success in natural language inference (NLI) tasks. While these models are widely explored, they are hard to interpret, and it is often unclear how and why they actually work. In this paper, we take a step toward explaining such deep-learning-based models through a case study on a popular neural model for NLI. In particular, we propose to interpret the intermediate layers of NLI models by visualizing the saliency of attention and LSTM gating signals. We present several examples for which our methods are able to reveal interesting insights and identify the critical information contributing to the model decisions.
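
The kind of saliency analysis the abstract describes can be approximated with gradient-based attribution: take the gradient of the predicted-class score with respect to the attention weights and inspect its magnitude. The following is a minimal sketch of that idea, assuming a toy PyTorch attention model; the class ToyAttentionNLI, the helper attention_saliency, and all layer choices are illustrative assumptions, not the architecture used in the paper.

# Minimal sketch of gradient-based saliency over attention weights.
# Hypothetical toy model; the paper's actual architecture is not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyAttentionNLI(nn.Module):
    """Toy premise/hypothesis encoder with a single soft-attention layer."""

    def __init__(self, vocab_size=1000, dim=64, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_classes)
        self.attention = None  # populated on each forward pass

    def forward(self, premise, hypothesis):
        p, _ = self.encoder(self.embed(premise))      # (B, Tp, d)
        h, _ = self.encoder(self.embed(hypothesis))   # (B, Th, d)
        scores = torch.bmm(p, h.transpose(1, 2))      # (B, Tp, Th)
        attn = F.softmax(scores, dim=-1)              # attention over hypothesis tokens
        attn.retain_grad()                            # keep gradient for saliency
        self.attention = attn
        aligned = torch.bmm(attn, h)                  # premise-to-hypothesis alignment
        pooled = torch.cat([p.mean(1), aligned.mean(1)], dim=-1)
        return self.classifier(pooled)

def attention_saliency(model, premise, hypothesis):
    """Return |d(predicted-class score)/d(attention weights)| per attention cell."""
    logits = model(premise, hypothesis)
    score = logits[0, logits.argmax(dim=-1)[0]]
    model.zero_grad()
    score.backward()
    return model.attention.grad.abs()[0]              # (Tp, Th) saliency map

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyAttentionNLI()
    premise = torch.randint(0, 1000, (1, 7))
    hypothesis = torch.randint(0, 1000, (1, 5))
    print(attention_saliency(model, premise, hypothesis))

Saliency for LSTM gating signals could be computed analogously by retaining gradients on the gate activations, which would require an LSTM cell that exposes them; the built-in nn.LSTM above does not.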
Anthology ID:
D18-1537
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4952–4957
URL:
https://aclanthology.org/D18-1537
DOI:
10.18653/v1/D18-1537
Cite (ACL):
Reza Ghaeini, Xiaoli Fern, and Prasad Tadepalli. 2018. Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4952–4957, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Interpreting Recurrent and Attention-Based Neural Models: a Case Study on Natural Language Inference (Ghaeini et al., EMNLP 2018)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/D18-1537.pdf
Attachment:
D18-1537.Attachment.zip