Interpretable Structure Induction via Sparse Attention

Ben Peters, Vlad Niculae, André F. T. Martins


Abstract
Neural network methods are experiencing wide adoption in NLP, thanks to their empirical performance on many tasks. Modern neural architectures go far beyond simple feedforward and recurrent models: they are complex pipelines that perform soft, differentiable computation instead of discrete logic. The price of such soft computation is the introduction of dense dependencies, which make it hard to disentangle the patterns that trigger a prediction. Our recent work on sparse and structured latent computation presents a promising avenue for enhancing the interpretability of such neural pipelines. In this extended abstract, we discuss and explore the potential and impact of our methods.
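
The "sparse and structured latent computation" mentioned above centers on replacing softmax with sparse alternatives inside attention layers, so that most attention weights become exactly zero and the surviving ones are easier to inspect. Below is a minimal NumPy sketch of sparsemax (Martins and Astudillo, 2016), one such transformation in this line of work; the function name and example scores are illustrative assumptions, not code from the paper.

    import numpy as np

    def sparsemax(z):
        """Sparsemax: Euclidean projection of a score vector z onto the
        probability simplex. Unlike softmax, it can assign exactly zero
        weight to low-scoring items, yielding sparse attention."""
        z = np.asarray(z, dtype=float)
        z_sorted = np.sort(z)[::-1]               # scores in decreasing order
        k = np.arange(1, z.size + 1)
        cumsum = np.cumsum(z_sorted)
        support = k[1.0 + k * z_sorted > cumsum]  # indices kept in the support
        k_max = support[-1]
        tau = (cumsum[k_max - 1] - 1.0) / k_max   # threshold subtracted from scores
        return np.maximum(z - tau, 0.0)           # nonnegative, sums to 1

    # Softmax would spread mass over all three scores; sparsemax attends
    # to a strict subset, which is what makes the weights interpretable.
    print(sparsemax([2.0, 1.0, -1.0]))   # -> [1. 0. 0.]
    print(sparsemax([1.0, 0.9, 0.1]))    # -> [0.55 0.45 0.  ]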
Anthology ID: W18-5450
Volume: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month: November
Year: 2018
Address: Brussels, Belgium
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 365–367
URL: https://aclanthology.org/W18-5450
DOI: 10.18653/v1/W18-5450
Cite (ACL): Ben Peters, Vlad Niculae, and André F. T. Martins. 2018. Interpretable Structure Induction via Sparse Attention. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 365–367, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Interpretable Structure Induction via Sparse Attention (Peters et al., EMNLP 2018)
PDF: https://aclanthology.org/W18-5450.pdf