Interpreting Word-Level Hidden State Behaviour of Character-Level LSTM Language Models

Avery Hiebert, Cole Peterson, Alona Fyshe, Nishant Mehta


Abstract
While Long Short-Term Memory networks (LSTMs) and other forms of recurrent neural network have been successfully applied to language modeling on a character level, the hidden state dynamics of these models can be difficult to interpret. We investigate the hidden states of such a model by using the HDBSCAN clustering algorithm to identify points in the text at which the hidden state is similar. Focusing on whitespace characters prior to the beginning of a word reveals interpretable clusters that offer insight into how the LSTM may combine contextual and character-level information to identify parts of speech. We also introduce a method for deriving word vectors from the hidden state representation in order to investigate the word-level knowledge of the model. These word vectors encode meaningful semantic information even for words that appear only once in the training text.
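The core procedure described in the abstract can be summarized concretely. The sketch below (not the authors' released code; model sizes, the `corpus.txt` path, and the `min_cluster_size` value are illustrative assumptions) shows one way to collect the hidden state of a character-level LSTM at each whitespace character that precedes a word and cluster those states with HDBSCAN:

```python
# Hedged sketch: cluster pre-word hidden states of a character-level LSTM with HDBSCAN.
# Hyperparameters and file names are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import hdbscan


class CharLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, char_ids):
        h, _ = self.lstm(self.embed(char_ids))  # h: (1, seq_len, hidden_dim)
        return self.out(h), h


text = open("corpus.txt").read()                # hypothetical training text
chars = sorted(set(text))
char_to_id = {c: i for i, c in enumerate(chars)}

model = CharLSTM(len(chars))
# ... train as an ordinary next-character language model ...

with torch.no_grad():
    ids = torch.tensor([[char_to_id[c] for c in text]])
    _, hidden = model(ids)
    hidden = hidden.squeeze(0)                  # (seq_len, hidden_dim)

# Keep only hidden states at spaces immediately preceding a word,
# i.e. the "pre-word" positions the abstract focuses on.
pre_word = [i for i in range(len(text) - 1)
            if text[i] == " " and text[i + 1].isalpha()]
states = hidden[pre_word].numpy()

clusterer = hdbscan.HDBSCAN(min_cluster_size=20)
labels = clusterer.fit_predict(states)          # -1 marks noise points

# Inspect which upcoming words fall into each cluster.
for lbl in set(labels) - {-1}:
    words = [text[i + 1:].split()[0]
             for i, l in zip(pre_word, labels) if l == lbl]
    print(lbl, words[:10])
```

Inspecting the words that follow each clustered whitespace position is one way to check whether a cluster corresponds to an interpretable category such as a part of speech, as the abstract suggests.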
Anthology ID:
W18-5428
Volume:
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2018
Address:
Brussels, Belgium
Editors:
Tal Linzen, Grzegorz Chrupała, Afra Alishahi
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
258–266
URL:
https://aclanthology.org/W18-5428
DOI:
10.18653/v1/W18-5428
Cite (ACL):
Avery Hiebert, Cole Peterson, Alona Fyshe, and Nishant Mehta. 2018. Interpreting Word-Level Hidden State Behaviour of Character-Level LSTM Language Models. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 258–266, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Interpreting Word-Level Hidden State Behaviour of Character-Level LSTM Language Models (Hiebert et al., EMNLP 2018)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/W18-5428.pdf