@inproceedings{hiebert-etal-2018-interpreting,
    title = "Interpreting Word-Level Hidden State Behaviour of Character-Level {LSTM} Language Models",
    author = "Hiebert, Avery  and
      Peterson, Cole  and
      Fyshe, Alona  and
      Mehta, Nishant",
    editor = "Linzen, Tal  and
      Chrupa{\l}a, Grzegorz  and
      Alishahi, Afra",
    booktitle = "Proceedings of the 2018 {EMNLP} Workshop {B}lackbox{NLP}: Analyzing and Interpreting Neural Networks for {NLP}",
    month = nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W18-5428/",
    doi = "10.18653/v1/W18-5428",
    pages = "258--266",
    abstract = "While Long Short-Term Memory networks (LSTMs) and other forms of recurrent neural network have been successfully applied to language modeling on a character level, the hidden state dynamics of these models can be difficult to interpret. We investigate the hidden states of such a model by using the HDBSCAN clustering algorithm to identify points in the text at which the hidden state is similar. Focusing on whitespace characters prior to the beginning of a word reveals interpretable clusters that offer insight into how the LSTM may combine contextual and character-level information to identify parts of speech. We also introduce a method for deriving word vectors from the hidden state representation in order to investigate the word-level knowledge of the model. These word vectors encode meaningful semantic information even for words that appear only once in the training text."
}