Different Ways to Forget: Linguistic Gates in Recurrent Neural Networks
Cristiano Chesi, Veronica Bressan, Matilde Barbini, Achille Fusco, Maria Letizia Piccini Bianchessi, Sofia Neri, Sarah Rossi, Tommaso Sgrizzi
Abstract
This work explores alternative gating systems in simple Recurrent Neural Networks (RNNs) that induce linguistically motivated biases during training, ultimately affecting models’ performance on the BLiMP task. We focus exclusively on the BabyLM 10M training corpus (Strict-Small Track). Our experiments reveal that: (i) standard RNN variants—LSTMs and GRUs—are insufficient for properly learning the relevant set of linguistic constraints; (ii) the quality or size of the training corpus has little impact on these networks, as demonstrated by the comparable performance of LSTMs trained exclusively on the child-directed speech portion of the corpus; (iii) increasing the size of the embedding and hidden layers does not significantly improve performance. In contrast, specifically gated RNNs (eMG-RNNs), inspired by certain Minimalist Grammar intuitions, exhibit advantages in both training loss and BLiMP accuracy.

- Anthology ID: 2024.conll-babylm.9
- Volume: The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning
- Month: November
- Year: 2024
- Address: Miami, FL, USA
- Editors: Michael Y. Hu, Aaron Mueller, Candace Ross, Adina Williams, Tal Linzen, Chengxu Zhuang, Leshem Choshen, Ryan Cotterell, Alex Warstadt, Ethan Gotlieb Wilcox
- Venues: CoNLL | BabyLM | WS
- Publisher: Association for Computational Linguistics
- Pages: 106–117
- URL: https://preview.aclanthology.org/jlcl-multiple-ingestion/2024.conll-babylm.9/
- Cite (ACL): Cristiano Chesi, Veronica Bressan, Matilde Barbini, Achille Fusco, Maria Letizia Piccini Bianchessi, Sofia Neri, Sarah Rossi, and Tommaso Sgrizzi. 2024. Different Ways to Forget: Linguistic Gates in Recurrent Neural Networks. In The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning, pages 106–117, Miami, FL, USA. Association for Computational Linguistics.
- Cite (Informal): Different Ways to Forget: Linguistic Gates in Recurrent Neural Networks (Chesi et al., CoNLL-BabyLM 2024)
- PDF: https://preview.aclanthology.org/jlcl-multiple-ingestion/2024.conll-babylm.9.pdf
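For background on the gating the abstract refers to, the following is a minimal NumPy sketch of the *standard* LSTM cell update — the baseline whose forget mechanism the paper's eMG-RNN variants replace. This is the textbook formulation only; the linguistically motivated gates themselves are defined in the paper, and all names and sizes below are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One standard LSTM step. W, U, b stack the four gates (i, f, o, g)."""
    z = W @ x + U @ h_prev + b      # pre-activations, shape (4*H,)
    H = h_prev.shape[0]
    i = sigmoid(z[0*H:1*H])         # input gate
    f = sigmoid(z[1*H:2*H])         # forget gate: "how to forget"
    o = sigmoid(z[2*H:3*H])         # output gate
    g = np.tanh(z[3*H:4*H])         # candidate cell update
    c = f * c_prev + i * g          # new cell state
    h = o * np.tanh(c)              # new hidden state
    return h, c

# Toy dimensions (illustrative, not the paper's configuration)
rng = np.random.default_rng(0)
D, H = 5, 4                         # embedding size, hidden size
x = rng.normal(size=D)
h0, c0 = np.zeros(H), np.zeros(H)
W = rng.normal(scale=0.1, size=(4*H, D))
U = rng.normal(scale=0.1, size=(4*H, H))
b = np.zeros(4*H)
h1, c1 = lstm_step(x, h0, c0, W, U, b)
print(h1.shape, c1.shape)
```

The multiplicative forget gate `f` is the point of variation: the paper's alternative gates bias *what* is retained in `c` during training, rather than leaving it entirely to learned weights.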