A Distributional Perspective on Word Learning in Neural Language Models

Filippo Ficarra, Ryan Cotterell, Alex Warstadt


Abstract
Language models (LMs) are increasingly being studied as models of human language learners. Due to the nascency of the field, it is not well-established whether LMs exhibit similar learning dynamics to humans, and there are few direct comparisons between learning trajectories in humans and models. Word learning trajectories for children are relatively well-documented, and recent work has tried to extend these investigations to language models. However, there are no widely agreed-upon metrics for word learning in language models. We take a distributional approach to this problem, defining lexical knowledge in terms of properties of the learned distribution for a target word. We argue that distributional signatures studied in prior work fail to capture key distributional information. Thus, we propose an array of signatures that improve on earlier approaches by capturing knowledge of both where the target word can and cannot occur as well as gradient preferences about the word’s appropriateness. We obtain learning trajectories for a selection of small language models we train from scratch, study the relationship between different distributional signatures, compare how well they align with human word learning trajectories and interpretable lexical features, and address basic methodological questions about estimating these distributional signatures. Our metrics largely capture complementary information, suggesting that it is important not to rely on a single metric. However, across all metrics, language models’ learning trajectories fail to correlate with those of children.
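As a rough illustration of the distributional idea described in the abstract, the sketch below estimates one simple signature: the average log-probability a causal LM assigns to a target word across a set of contexts. This is a minimal proxy, not the paper's actual signatures; the model (gpt2), the `mean_target_logprob` helper, and the example contexts are all hypothetical choices for the sake of the example.

```python
# Minimal sketch (assumptions noted above): average in-context log-probability
# of a target word under a small causal LM, as one possible distributional signature.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def mean_target_logprob(contexts, target_word):
    """Average log P(target_word | context) over a list of left contexts."""
    logprobs = []
    for context in contexts:
        ids = tokenizer(context, return_tensors="pt").input_ids
        # Multi-token targets are scored token by token, conditioning on the
        # context plus previously scored target tokens.
        target_ids = tokenizer(" " + target_word, add_special_tokens=False).input_ids
        total = 0.0
        for tid in target_ids:
            with torch.no_grad():
                logits = model(ids).logits[0, -1]
            total += torch.log_softmax(logits, dim=-1)[tid].item()
            ids = torch.cat([ids, torch.tensor([[tid]])], dim=1)
        logprobs.append(total)
    return sum(logprobs) / len(logprobs)

# Hypothetical usage: higher values suggest the model finds the word more
# appropriate in these contexts.
print(mean_target_logprob(["The dog chased the", "She fed the"], "cat"))
```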
Anthology ID:
2025.naacl-long.558
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
11184–11207
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.558/
Cite (ACL):
Filippo Ficarra, Ryan Cotterell, and Alex Warstadt. 2025. A Distributional Perspective on Word Learning in Neural Language Models. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 11184–11207, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
A Distributional Perspective on Word Learning in Neural Language Models (Ficarra et al., NAACL 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.558.pdf