Looking to Learn: Token-wise Dynamic Gating for Low-Resource Vision-Language Modelling
Bianca-Mihaela Ganescu, Suchir Salhan, Andrew Caines, Paula Buttery
Abstract
Training vision-language models on cognitively plausible amounts of data requires rethinking how models integrate multimodal information. Within the constraints of the Vision track for the BabyLM Challenge 2025, we propose a lightweight decoder-based architecture with (1) token-wise dynamic gating for adaptive fusion of linguistic and visual cues, (2) feature modulation and channel attention to maximise the utility of limited visual information, and (3) auxiliary contrastive objectives for visual grounding. Evaluation on five benchmarks (BLiMP, BLiMP Supplement, EWoK, Winoground and VQA) shows performance competitive with or superior to multimodal baselines. More notably, our dynamic gate discovers interpretable patterns without explicit supervision, favouring visual cues for content words and linguistic cues for function words. While we identify limitations in the Challenge constraints, such as the information bottleneck created by global image embeddings and training instability from the dataset split, our findings establish dynamic gating as a powerful tool for efficient multimodal learning, offering both interpretability and performance even under severe constraints.
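The token-wise gating in (1) can be pictured as a learned, per-token convex combination of the linguistic and visual streams. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation: the module name `TokenWiseGate`, the dimensions, and the choice of predicting the gate from the concatenated token and image features are assumptions made for illustration only.

```python
# Minimal sketch of token-wise dynamic gating (illustrative; names, dimensions,
# and the fusion rule are assumptions, not the paper's released code).
import torch
import torch.nn as nn


class TokenWiseGate(nn.Module):
    """Per-token scalar gate mixing a linguistic and a visual feature stream."""

    def __init__(self, d_model: int):
        super().__init__()
        # Project the global image embedding into the token feature space.
        self.visual_proj = nn.Linear(d_model, d_model)
        # Predict one gate value per token from the concatenated features.
        self.gate = nn.Sequential(nn.Linear(2 * d_model, 1), nn.Sigmoid())

    def forward(self, text_h: torch.Tensor, visual_h: torch.Tensor) -> torch.Tensor:
        # text_h:   (batch, seq_len, d_model) token hidden states from the decoder
        # visual_h: (batch, d_model) global image embedding
        v = self.visual_proj(visual_h).unsqueeze(1).expand_as(text_h)
        g = self.gate(torch.cat([text_h, v], dim=-1))  # (batch, seq_len, 1)
        # g near 1 -> lean on visual cues; g near 0 -> lean on linguistic cues.
        return g * v + (1.0 - g) * text_h


# Example: fuse a 4-token sequence with one image embedding per example.
fuse = TokenWiseGate(d_model=256)
fused = fuse(torch.randn(2, 4, 256), torch.randn(2, 256))
print(fused.shape)  # torch.Size([2, 4, 256])
```

Because the gate is a per-token scalar in [0, 1], its values can be read off directly, which is what makes the kind of content-word versus function-word analysis described in the abstract possible.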
- Anthology ID: 2025.babylm-main.15
- Volume: Proceedings of the First BabyLM Workshop
- Month: November
- Year: 2025
- Address: Suzhou, China
- Editors: Lucas Charpentier, Leshem Choshen, Ryan Cotterell, Mustafa Omer Gul, Michael Y. Hu, Jing Liu, Jaap Jumelet, Tal Linzen, Aaron Mueller, Candace Ross, Raj Sanjay Shah, Alex Warstadt, Ethan Gotlieb Wilcox, Adina Williams
- Venue: BabyLM
- Publisher: Association for Computational Linguistics
- Pages: 192–217
- URL: https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.babylm-main.15/
- DOI: 10.18653/v1/2025.babylm-main.15
- Cite (ACL): Bianca-Mihaela Ganescu, Suchir Salhan, Andrew Caines, and Paula Buttery. 2025. Looking to Learn: Token-wise Dynamic Gating for Low-Resource Vision-Language Modelling. In Proceedings of the First BabyLM Workshop, pages 192–217, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal): Looking to Learn: Token-wise Dynamic Gating for Low-Resource Vision-Language Modelling (Ganescu et al., BabyLM 2025)
- PDF: https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.babylm-main.15.pdf