The Emergence of Semantics in Neural Network Representations of Visual Information

Dhanush Dharmaretnam, Alona Fyshe


Abstract
Word vector models learn about semantics through corpora. Convolutional Neural Networks (CNNs) can learn about semantics through images. At the most abstract level, some of the information in these models must be shared, as they model the same real-world phenomena. Here we employ techniques previously used to detect semantic representations in the human brain to detect semantic representations in CNNs. We show the accumulation of semantic information in the layers of the CNN, and discover that, for misclassified images, the correct class can be recovered in intermediate layers of a CNN.
Anthology ID:
N18-2122
Volume:
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Editors:
Marilyn Walker, Heng Ji, Amanda Stent
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
776–780
URL:
https://aclanthology.org/N18-2122
DOI:
10.18653/v1/N18-2122
Cite (ACL):
Dhanush Dharmaretnam and Alona Fyshe. 2018. The Emergence of Semantics in Neural Network Representations of Visual Information. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 776–780, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
The Emergence of Semantics in Neural Network Representations of Visual Information (Dharmaretnam & Fyshe, NAACL 2018)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/N18-2122.pdf
Note:
N18-2122.Notes.pdf
Data
ImageNet