Simpler neural networks prefer subregular languages

Charles Torres, Richard Futrell


Abstract
We apply a continuous relaxation of L0 regularization (Louizos et al., 2017), which induces sparsity, to study the inductive biases of LSTMs. In particular, we are interested in the patterns of formal languages which are readily learned and expressed by LSTMs. Across a wide range of tests we find that sparse LSTMs prefer subregular languages over regular languages, and that the strength of this preference increases as we increase the pressure for sparsity. Furthermore, LSTMs trained on subregular languages have fewer non-zero parameters. We conjecture that this subregular bias in LSTMs is related to the cognitive bias for subregular languages observed in human phonology, both of which are downstream of a simplicity bias in a suitable description language.
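
The sketch below (not the authors' code) illustrates the technique the abstract refers to: the hard-concrete gate of Louizos et al. (2017), a continuous relaxation of L0 regularization. Each weight is multiplied by a stochastic gate in [0, 1], and the expected number of non-zero gates is differentiable, so it can be added to the training loss as a sparsity penalty. Parameter names (beta, gamma, zeta) follow Louizos et al.'s notation; the class and its use with an LSTM are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class HardConcreteGate(nn.Module):
    """One stochastic gate z in [0, 1] per weight; E[z != 0] is differentiable."""

    def __init__(self, shape, beta=2/3, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(shape))  # location parameter per weight
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self):
        if self.training:
            # sample from the binary concrete distribution via the logistic reparameterization
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        else:
            s = torch.sigmoid(self.log_alpha)
        # stretch to (gamma, zeta), then clip to [0, 1] ("hard" concrete)
        return (s * (self.zeta - self.gamma) + self.gamma).clamp(0.0, 1.0)

    def l0_penalty(self):
        # expected number of non-zero gates, summed over the tensor
        return torch.sigmoid(
            self.log_alpha - self.beta * torch.log(torch.tensor(-self.gamma / self.zeta))
        ).sum()

In training, the LSTM's weight tensors would be multiplied elementwise by their gates and the objective would be the task loss plus lambda times the summed l0_penalty; raising lambda corresponds to increasing the pressure for sparsity described in the abstract.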
Anthology ID: 2023.findings-emnlp.112
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1651–1661
URL: https://aclanthology.org/2023.findings-emnlp.112
DOI: 10.18653/v1/2023.findings-emnlp.112
Cite (ACL): Charles Torres and Richard Futrell. 2023. Simpler neural networks prefer subregular languages. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1651–1661, Singapore. Association for Computational Linguistics.
Cite (Informal): Simpler neural networks prefer subregular languages (Torres & Futrell, Findings 2023)
PDF: https://preview.aclanthology.org/naacl24-info/2023.findings-emnlp.112.pdf