Profiling neural grammar induction on morphemically tokenised child-directed speech

Mila Marcheva, Theresa Biberauer, Weiwei Sun


Abstract
We investigate the performance of state-of-the-art (SotA) neural grammar induction (GI) models on a morphemically tokenised English dataset based on the CHILDES treebank (Pearl and Sprouse, 2013). Using implementations from Yang et al. (2021a), we train models and evaluate them with the standard F1 score. We introduce novel evaluation metrics—depth-of-morpheme and sibling-of-morpheme—which measure phenomena around bound morpheme attachment. Our results reveal that models with the highest F1 scores do not necessarily induce linguistically plausible structures for bound morpheme attachment, highlighting a key challenge for cognitively plausible GI.
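As a concrete illustration of the evaluation described above, the following minimal Python sketch computes unlabeled constituency span F1 (the standard F1 used in grammar induction evaluation) together with an illustrative depth-of-morpheme statistic. The nested-list tree encoding, the hyphen prefix marking bound morphemes, and the depth definition are assumptions for illustration only; the paper's exact metric definitions may differ.

# Minimal sketch, assuming trees are nested lists of token strings and that
# bound morphemes are marked with a leading hyphen (e.g. "-ing").

def tree_spans(tree, start=0):
    """Return (end, spans) where spans is the set of (start, end) constituents."""
    if isinstance(tree, str):          # a leaf token consumes one position
        return start + 1, set()
    end, out = start, set()
    for child in tree:
        end, child_spans = tree_spans(child, end)
        out |= child_spans
    out.add((start, end))              # add the span covered by this subtree
    return end, out

def span_f1(pred_tree, gold_tree):
    """Unlabeled span F1 between predicted and gold trees over the same tokens."""
    _, pred = tree_spans(pred_tree)
    _, gold = tree_spans(gold_tree)
    length = max(end for _, end in gold)
    pred.discard((0, length))          # drop the trivial whole-sentence span,
    gold.discard((0, length))          # as is common practice
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    p, r = overlap / len(pred), overlap / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def morpheme_depths(tree, is_bound, depth=0):
    """Depths at which bound-morpheme leaves sit (illustrative definition only)."""
    if isinstance(tree, str):
        return [depth] if is_bound(tree) else []
    return [d for child in tree for d in morpheme_depths(child, is_bound, depth + 1)]

# Hypothetical usage on a morphemically tokenised utterance "you eat -ing":
pred = ["you", ["eat", "-ing"]]
gold = [["you", "eat"], "-ing"]
print(span_f1(pred, gold))                                      # 0.0
print(morpheme_depths(pred, lambda tok: tok.startswith("-")))   # [2]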
Anthology ID: 2025.cmcl-1.7
Volume: Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month: May
Year: 2025
Address: Albuquerque, New Mexico, USA
Editors: Tatsuki Kuribayashi, Giulia Rambelli, Ece Takmaz, Philipp Wicke, Jixing Li, Byung-Doh Oh
Venues: CMCL | WS
Publisher: Association for Computational Linguistics
Pages: 47–54
URL: https://preview.aclanthology.org/fix-sig-urls/2025.cmcl-1.7/
Cite (ACL): Mila Marcheva, Theresa Biberauer, and Weiwei Sun. 2025. Profiling neural grammar induction on morphemically tokenised child-directed speech. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 47–54, Albuquerque, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal): Profiling neural grammar induction on morphemically tokenised child-directed speech (Marcheva et al., CMCL 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.cmcl-1.7.pdf