Abstract
When learning their native language, children acquire the meanings of words and sentences from highly ambiguous input without much explicit supervision. One possible learning mechanism is cross-situational learning, which has been successfully tested in laboratory experiments with children. Here we use Artificial Neural Networks to test whether this mechanism scales up to more natural language and visual scenes, using a large dataset of crowd-sourced images with corresponding descriptions. We evaluate learning using a series of tasks inspired by methods commonly used in laboratory studies of language acquisition. We show that the model acquires rich semantic knowledge at both the word and sentence level, mirroring the patterns and trajectory of learning in early childhood. Our work highlights the usefulness of low-level co-occurrence statistics across modalities in facilitating the early acquisition of higher-level semantic knowledge.
- Anthology ID: 2021.cmcl-1.24
- Volume: Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
- Month: June
- Year: 2021
- Address: Online
- Venue: CMCL
- Publisher: Association for Computational Linguistics
- Pages: 200–210
- URL: https://aclanthology.org/2021.cmcl-1.24
- DOI: 10.18653/v1/2021.cmcl-1.24
- Cite (ACL): Mitja Nikolaus and Abdellah Fourtassi. 2021. Evaluating the Acquisition of Semantic Knowledge from Cross-situational Learning in Artificial Neural Networks. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 200–210, Online. Association for Computational Linguistics.
- Cite (Informal): Evaluating the Acquisition of Semantic Knowledge from Cross-situational Learning in Artificial Neural Networks (Nikolaus & Fourtassi, CMCL 2021)
- PDF: https://preview.aclanthology.org/ingestion-script-update/2021.cmcl-1.24.pdf
- Code: mitjanikolaus/cross-situational-learning-abstract-scenes
- Data: COCO