Analogs of Linguistic Structure in Deep Representations

Jacob Andreas, Dan Klein


Abstract
We investigate the compositional structure of message vectors computed by a deep network trained on a communication game. By comparing truth-conditional representations of encoder-produced message vectors to human-produced referring expressions, we are able to identify aligned (vector, utterance) pairs with the same meaning. We then search for structured relationships among these aligned pairs to discover simple vector space transformations corresponding to negation, conjunction, and disjunction. Our results suggest that neural representations are capable of spontaneously developing a “syntax” with functional analogues to qualitative properties of natural language.
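As a purely illustrative companion to the abstract, the sketch below shows one way such an analysis can be set up: given message vectors aligned with some utterance "x" and message vectors aligned with "not x", fit a single linear map and test how well it predicts negation on held-out pairs. The synthetic data and the least-squares formulation here are assumptions for illustration, not the authors' implementation (see jacobandreas/rnn-syn for that).

# A minimal sketch (not the paper's code) of searching for a vector space
# transformation corresponding to negation. Columns of X stand in for
# message vectors whose meaning matches an utterance "x"; columns of Y
# stand in for vectors matching "not x". In the paper these would come
# from encoder messages aligned with human referring expressions.
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs = 64, 500

# Pretend ground-truth negation map, used only to generate synthetic data.
true_negation = rng.standard_normal((d, d)) / np.sqrt(d)
X = rng.standard_normal((d, n_pairs))
Y = true_negation @ X + 0.01 * rng.standard_normal((d, n_pairs))

# Least-squares fit of a candidate negation map N: minimize ||N X - Y||_F.
# lstsq solves A @ coef = B, so transpose to put N on the right-hand side.
N, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
N = N.T

# Evaluate: does the learned map send held-out vectors near their negations?
X_test = rng.standard_normal((d, 100))
Y_test = true_negation @ X_test
pred = N @ X_test
cos = np.sum(pred * Y_test, axis=0) / (
    np.linalg.norm(pred, axis=0) * np.linalg.norm(Y_test, axis=0))
print(f"mean cosine similarity on held-out pairs: {cos.mean():.3f}")

Analogous fits (e.g., a map over concatenated vector pairs for conjunction or disjunction) follow the same pattern; a high held-out similarity is the kind of evidence the abstract describes for a structured relationship in the representation space.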
Anthology ID: D17-1311
Volume: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month: September
Year: 2017
Address: Copenhagen, Denmark
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 2893–2897
URL: https://aclanthology.org/D17-1311
DOI: 10.18653/v1/D17-1311
Cite (ACL): Jacob Andreas and Dan Klein. 2017. Analogs of Linguistic Structure in Deep Representations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2893–2897, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal): Analogs of Linguistic Structure in Deep Representations (Andreas & Klein, EMNLP 2017)
PDF: https://preview.aclanthology.org/ingestion-script-update/D17-1311.pdf
Video: https://vimeo.com/238231647
Code: jacobandreas/rnn-syn (additional community code available)