Encoding of phonology in a recurrent neural model of grounded speech

Afra Alishahi, Marie Barking, Grzegorz Chrupała



Abstract
We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates the encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to those proposed in linguistics.
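
The abstract's phoneme decoding and clustering analyses can be illustrated with a minimal sketch: train a linear classifier to predict phoneme labels from feature vectors (MFCCs or layer activations aligned to phoneme segments), then hierarchically cluster per-phoneme mean vectors. The data, variable names, and classifier choice below are illustrative assumptions, not the authors' actual code (see the linked repository for that).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)

# Stand-in data: one feature vector per phoneme occurrence, e.g. mean-pooled
# activations of one recurrent layer over the phoneme's time slice.
n_phonemes, dim, n_samples = 38, 512, 5000
X = rng.normal(size=(n_samples, dim))            # placeholder activations
y = rng.integers(0, n_phonemes, size=n_samples)  # placeholder phoneme labels

# Phoneme decoding: higher held-out accuracy suggests the representation
# encodes more phonological information.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("phoneme decoding accuracy:", clf.score(X_te, y_te))

# Hierarchical clustering of per-phoneme mean vectors, analogous to the
# organizational analysis mentioned in the abstract.
centroids = np.stack([X[y == p].mean(axis=0) for p in range(n_phonemes)])
tree = dendrogram(linkage(centroids, method="ward"), no_plot=True)
print("leaf order of phoneme clusters:", tree["leaves"])

Comparing decoding accuracy across MFCC input and successive layers, and inspecting whether the dendrogram groups phonemes into familiar classes (e.g. vowels vs. obstruents), mirrors the comparisons reported in the paper.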
Anthology ID:
K17-1037
Volume:
Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)
Month:
August
Year:
2017
Address:
Vancouver, Canada
Editors:
Roger Levy, Lucia Specia
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Note:
Pages:
368–378
Language:
URL:
https://aclanthology.org/K17-1037
DOI:
10.18653/v1/K17-1037
Bibkey:
Cite (ACL):
Afra Alishahi, Marie Barking, and Grzegorz Chrupała. 2017. Encoding of phonology in a recurrent neural model of grounded speech. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 368–378, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Encoding of phonology in a recurrent neural model of grounded speech (Alishahi et al., CoNLL 2017)
PDF:
https://preview.aclanthology.org/teach-a-man-to-fish/K17-1037.pdf
Presentation:
 K17-1037.Presentation.pdf
Code
 gchrupala/encoding-of-phonology
Data
MS COCO