Abstract
The success of pretrained contextual encoders, such as ELMo and BERT, has sparked a great deal of interest in what these models learn: do they, without explicit supervision, learn to encode meaningful notions of linguistic structure? If so, how is this structure encoded? To investigate this, we introduce latent subclass learning (LSL): a modification to classifier-based probing that induces a latent categorization (or ontology) of the probe’s inputs. Without access to fine-grained gold labels, LSL extracts emergent structure from input representations in an interpretable and quantifiable form. In experiments, we find strong evidence of familiar categories, such as a notion of personhood in ELMo, as well as novel ontological distinctions, such as a preference for fine-grained semantic roles on core arguments. Our results provide unique new evidence of emergent structure in pretrained encoders, including departures from existing annotations that are inaccessible to earlier methods.
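The core idea in the abstract, replacing each coarse label's single probe weight vector with several latent subclass vectors and training on coarse labels only, can be sketched compactly. Below is a minimal, hypothetical PyTorch sketch: the class name `LatentSubclassProbe`, the logsumexp aggregation over subclass logits, and the subclass count `k` are illustrative assumptions, not the paper's exact formulation, whose objective and regularization may differ.

```python
import torch
import torch.nn as nn


class LatentSubclassProbe(nn.Module):
    """Hypothetical sketch of latent subclass learning (LSL).

    Each coarse label gets k scoring vectors instead of one; the label
    logit aggregates its subclass logits (here via logsumexp), so a probe
    trained only on coarse labels induces a latent categorization of its
    inputs. Assumed formulation; see the paper for the actual method.
    """

    def __init__(self, dim: int, num_labels: int, k: int):
        super().__init__()
        self.num_labels, self.k = num_labels, k
        # One scoring vector per (label, subclass) pair.
        self.scorer = nn.Linear(dim, num_labels * k)

    def forward(self, reps: torch.Tensor) -> torch.Tensor:
        # reps: (batch, dim) frozen contextual representations.
        sub = self.scorer(reps).view(-1, self.num_labels, self.k)
        # Marginalize over latent subclasses to get coarse-label logits.
        return torch.logsumexp(sub, dim=-1)  # (batch, num_labels)

    @torch.no_grad()
    def induced_subclass(self, reps: torch.Tensor) -> torch.Tensor:
        # The emergent "ontology": each input's best-scoring subclass.
        sub = self.scorer(reps).view(-1, self.num_labels, self.k)
        return sub.argmax(dim=-1)  # (batch, num_labels)
```

Under these assumptions, training proceeds as with an ordinary linear probe (e.g. `nn.CrossEntropyLoss` on the coarse-label logits over frozen ELMo or BERT features), and the induced ontology is read off afterwards from the subclass assignments of the probe's inputs.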
- Anthology ID: 2020.emnlp-main.552
- Volume: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
- Month: November
- Year: 2020
- Address: Online
- Editors: Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 6792–6812
- URL: https://aclanthology.org/2020.emnlp-main.552
- DOI: 10.18653/v1/2020.emnlp-main.552
- Cite (ACL): Julian Michael, Jan A. Botha, and Ian Tenney. 2020. Asking without Telling: Exploring Latent Ontologies in Contextual Representations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6792–6812, Online. Association for Computational Linguistics.
- Cite (Informal): Asking without Telling: Exploring Latent Ontologies in Contextual Representations (Michael et al., EMNLP 2020)
- PDF: https://aclanthology.org/2020.emnlp-main.552.pdf