On the Realization of Compositionality in Neural Networks
Joris Baan | Jana Leible | Mitja Nikolaus | David Rau | Dennis Ulmer | Tim Baumgärtner | Dieuwke Hupkes | Elia Bruni
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
We present a detailed comparison of two types of sequence-to-sequence models trained to conduct a compositional task. The models are architecturally identical at inference time, but differ in the way that they are trained: our baseline model is trained with a task-success signal only, while the other model receives additional supervision on its attention mechanism (Attentive Guidance), which has been shown to be an effective method for encouraging more compositional solutions. We first confirm that the models with attentive guidance indeed infer more compositional solutions than the baseline, by training them on the lookup table task presented by Liska et al. (2019). We then conduct an in-depth analysis of the structural differences between the two model types, focusing in particular on the organisation of the parameter space and the hidden layer activations, and find noticeable differences in both these aspects. Guided networks focus more on the components of the input rather than the sequence as a whole, and develop small functional groups of neurons with specific purposes that use their gates more selectively. Results from parameter heat maps, component swapping and graph analysis also indicate that guided networks exhibit a more modular structure with a small number of specialized, strongly connected neurons.
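To give a concrete sense of what supervision on the attention mechanism can look like, here is a minimal sketch of an auxiliary attention loss added to the usual task loss. The function name, the `lambda_ag` weight, and the cross-entropy form of the penalty are illustrative assumptions, not the paper's exact implementation:

```python
import torch


def attentive_guidance_loss(attention_weights: torch.Tensor,
                            target_attention: torch.Tensor) -> torch.Tensor:
    """Hypothetical attention-supervision term.

    attention_weights: (batch, tgt_len, src_len), model's attention
                       distributions over input positions (rows sum to 1).
    target_attention:  (batch, tgt_len, src_len), gold attention pattern
                       (e.g. one-hot over the input token each output
                       step should attend to).
    """
    eps = 1e-8  # avoid log(0) for zero attention weights
    # Cross-entropy between target and predicted attention distributions,
    # averaged over batch elements and decoding steps.
    return -(target_attention * torch.log(attention_weights + eps)).sum(-1).mean()


# Assumed usage: the guided model optimizes the task loss plus the
# weighted attention penalty; the baseline uses task_loss alone.
# total_loss = task_loss + lambda_ag * attentive_guidance_loss(attn, gold_attn)
```

Under this reading, the baseline and guided models share the same architecture and differ only in whether this extra term shapes the attention patterns during training.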