‘Indicatements’ that character language models learn English morpho-syntactic units and regularities
Abstract
Character language models have access to surface morphological patterns, but it is not clear whether or how they learn abstract morphological regularities. We instrument a character language model with several probes, finding that it can develop a specific unit to identify word boundaries and, by extension, morpheme boundaries, which allows it to capture linguistic properties and regularities of these units. Our language model proves surprisingly good at identifying the selectional restrictions of English derivational morphemes, a task that requires both morphological and syntactic awareness. Thus we conclude that, when morphemes overlap extensively with the words of a language, a character language model can perform morphological abstraction.
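The abstract describes instrumenting a character language model with diagnostic probes that read word-boundary information off its internal states. The sketch below is a rough illustration only, not the authors' code or experimental setup: it trains a toy character-level LSTM language model in PyTorch and fits a linear probe on its hidden states to predict whether the next character is a word boundary. The toy corpus, model sizes, and training lengths are all assumptions made for demonstration.

```python
# Illustrative sketch (not the paper's implementation): tiny character-level LSTM LM
# plus a linear diagnostic probe for word boundaries. Hyperparameters and the toy
# corpus are assumptions for demonstration only.
import torch
import torch.nn as nn

text = "the cats walked slowly while the dogs barked loudly " * 20
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
ids = torch.tensor([stoi[c] for c in text])

class CharLM(nn.Module):
    def __init__(self, vocab, emb=32, hid=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.out(h), h          # next-character logits, hidden states

model = CharLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x, y = ids[:-1].unsqueeze(0), ids[1:].unsqueeze(0)   # next-character prediction task
for _ in range(200):                                  # brief training loop
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Diagnostic probe: can a linear classifier read "word boundary ahead" off the hidden state?
with torch.no_grad():
    _, hidden = model(x)
hidden = hidden.squeeze(0)
boundary = (y.squeeze(0) == stoi[" "]).float()        # 1 if the next character is a space

probe = nn.Linear(hidden.size(1), 1)
popt = torch.optim.Adam(probe.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
for _ in range(200):
    ploss = bce(probe(hidden).squeeze(-1), boundary)
    popt.zero_grad(); ploss.backward(); popt.step()

acc = ((probe(hidden).squeeze(-1) > 0).float() == boundary).float().mean()
print(f"probe accuracy on word-boundary detection: {acc:.2f}")
```

A probe of this kind only tests whether boundary information is linearly recoverable from the hidden states; the paper's analyses go further, examining individual units and morpho-syntactic regularities.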
- Anthology ID: W18-5417
- Volume: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
- Month: November
- Year: 2018
- Address: Brussels, Belgium
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 145–153
- URL: https://aclanthology.org/W18-5417
- DOI: 10.18653/v1/W18-5417
- Cite (ACL): Yova Kementchedjhieva and Adam Lopez. 2018. ‘Indicatements’ that character language models learn English morpho-syntactic units and regularities. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 145–153, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal): ‘Indicatements’ that character language models learn English morpho-syntactic units and regularities (Kementchedjhieva & Lopez, EMNLP 2018)
- PDF: https://preview.aclanthology.org/ingestion-script-update/W18-5417.pdf