Inherent Biases of Recurrent Neural Networks for Phonological Assimilation and Dissimilation

Abstract
A recurrent neural network model of phonological pattern learning is proposed. The model is a relatively simple neural network with one recurrent layer, and it displays biases in learning that mimic observed biases in human learning. Single-feature patterns are learned faster than two-feature patterns, and vowel-only or consonant-only patterns are learned faster than patterns involving both vowels and consonants, mimicking the results of laboratory learning experiments. In non-recurrent models, capturing these biases requires the use of alpha features or some other representation of repeated features, but with a recurrent neural network, these elaborations are not necessary.
- Anthology ID:
- W17-0705
- Volume:
- Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)
- Month:
- April
- Year:
- 2017
- Address:
- Valencia, Spain
- Editors:
- Ted Gibson, Tal Linzen, Asad Sayeed, Martin van Schijndel, William Schuler
- Venue:
- CMCL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 35–40
- URL:
- https://aclanthology.org/W17-0705
- DOI:
- 10.18653/v1/W17-0705
- Cite (ACL):
- Amanda Doucette. 2017. Inherent Biases of Recurrent Neural Networks for Phonological Assimilation and Dissimilation. In Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017), pages 35–40, Valencia, Spain. Association for Computational Linguistics.
- Cite (Informal):
- Inherent Biases of Recurrent Neural Networks for Phonological Assimilation and Dissimilation (Doucette, CMCL 2017)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-2/W17-0705.pdf
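The abstract describes a relatively simple network with a single recurrent layer trained on feature sequences. The following is a minimal sketch of that general architecture, not the paper's actual implementation: layer sizes, the toy "single-feature" word, and the simplification of updating only the readout weights (rather than full backpropagation through time) are all assumptions made for illustration.

```python
import math
import random

random.seed(0)
N_FEAT, N_HID = 4, 8  # features per segment, hidden units (assumed sizes)

def mat(rows, cols):
    """Small random weight matrix."""
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

# One recurrent layer (Wx, Wh) plus a sigmoid readout (Wo).
Wx, Wh, Wo = mat(N_HID, N_FEAT), mat(N_HID, N_HID), mat(N_FEAT, N_HID)

def step(h, x):
    """One recurrent update: h' = tanh(Wx x + Wh h)."""
    return [math.tanh(sum(Wx[i][j] * x[j] for j in range(N_FEAT)) +
                      sum(Wh[i][j] * h[j] for j in range(N_HID)))
            for i in range(N_HID)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(seq, lr=0.5):
    """Predict each next segment's feature vector; update only the readout
    Wo (a sketch-level simplification -- full BPTT is omitted). Returns the
    summed squared prediction error over the sequence."""
    h = [0.0] * N_HID
    loss = 0.0
    for x, target in zip(seq[:-1], seq[1:]):
        h = step(h, x)
        y = [sigmoid(sum(Wo[i][j] * h[j] for j in range(N_HID)))
             for i in range(N_FEAT)]
        for i in range(N_FEAT):
            err = y[i] - target[i]
            loss += err * err
            grad = err * y[i] * (1 - y[i])  # squared error through sigmoid
            for j in range(N_HID):
                Wo[i][j] -= lr * grad * h[j]
    return loss

# Toy "single-feature" pattern: feature 0 is held constant across the word,
# loosely analogous to the assimilation patterns the paper discusses.
word = [[1, 0, 1, 0], [1, 1, 0, 0], [1, 0, 0, 1]]
losses = [train_step(word) for _ in range(300)]
```

Because the recurrent state carries earlier segments forward, a network like this can in principle track a repeated feature without the explicit alpha-feature machinery the abstract mentions for non-recurrent models; the per-pattern learning speeds reported in the paper are what distinguish single-feature from two-feature patterns.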