Abstract
A significant number of neural architectures for reading comprehension have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of “predication structure” in the hidden state vectors of these readers. More specifically, we provide evidence that the hidden state vectors represent atomic formulas Φ[c], where Φ is a semantic property (predicate) and c is a constant symbol (entity identifier).
- Anthology ID: W17-2604
- Volume: Proceedings of the 2nd Workshop on Representation Learning for NLP
- Month: August
- Year: 2017
- Address: Vancouver, Canada
- Venue: RepL4NLP
- SIG: SIGREP
- Publisher: Association for Computational Linguistics
- Pages: 26–36
- URL: https://aclanthology.org/W17-2604
- DOI: 10.18653/v1/W17-2604
- Cite (ACL): Hai Wang, Takeshi Onishi, Kevin Gimpel, and David McAllester. 2017. Emergent Predication Structure in Hidden State Vectors of Neural Readers. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 26–36, Vancouver, Canada. Association for Computational Linguistics.
- Cite (Informal): Emergent Predication Structure in Hidden State Vectors of Neural Readers (Wang et al., RepL4NLP 2017)
- PDF: https://preview.aclanthology.org/ingestion-script-update/W17-2604.pdf
- Data: CBT, Who-did-What
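The abstract's claim, that a reader's hidden state encodes an atomic formula Φ[c], can be illustrated with a toy sketch: if a hidden state were approximately the sum of a predicate vector and an entity-identifier vector, then the entity could be recovered by subtracting the predicate part and finding the nearest entity embedding. All names, dimensions, and the additive decomposition below are illustrative assumptions for this sketch, not the paper's code or data.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # embedding dimension (arbitrary choice for the sketch)

# Hypothetical setup: random entity-identifier vectors e_c and
# predicate vectors e_Phi.
n_entities, n_predicates = 10, 5
E = rng.normal(size=(n_entities, d))    # rows are entity vectors e_c
P = rng.normal(size=(n_predicates, d))  # rows are predicate vectors e_Phi

# Build a "hidden state" for the formula Phi[c] as a simple sum.
c, phi = 3, 1
h = P[phi] + E[c]

# Subtract the predicate part and take the nearest entity by dot product;
# for random high-dimensional vectors this should recover c.
residual = h - P[phi]
recovered = int(np.argmax(E @ residual))
print(recovered)
```

The point of the sketch is only that an additive predication structure makes the entity linearly decodable from the hidden state once the predicate component is accounted for.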