Mask and You Shall Receive: Optimizing Masked Language Modeling For Pretraining BabyLMs

Lukas Edman, Alexander Fraser


Abstract
We describe our strategy for the 2025 edition of the BabyLM Challenge. Our main contribution is an improved form of Masked Language Modeling (MLM) that adapts the masking probabilities of tokens according to the model’s ability to predict them. The results show a substantial increase in performance on (Super)GLUE tasks over standard MLM. We also incorporate sub-token embeddings, finding that this increases the model’s morphological generalization capabilities. Our submission beats the baseline in the strict-small track.
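The abstract only sketches the mechanism, so the snippet below is a minimal, hypothetical PyTorch illustration of loss-adaptive masking, not the paper's actual implementation: per-token losses from a previous pass are turned into masking probabilities (harder tokens masked more often) while keeping the expected masking rate near a target. The names `adaptive_mask_probs` and `apply_mask`, the softmax weighting, and the 15% target rate are all assumptions for the sketch.

```python
import torch

def adaptive_mask_probs(token_losses: torch.Tensor,
                        target_rate: float = 0.15,
                        temperature: float = 1.0) -> torch.Tensor:
    """Turn per-token losses from a previous forward pass into masking
    probabilities whose mean is (approximately) `target_rate`."""
    # Harder tokens (higher loss) get larger weights.
    weights = torch.softmax(token_losses / temperature, dim=-1)
    # Rescale so the probabilities average to the target masking rate.
    probs = weights * target_rate * token_losses.numel()
    return probs.clamp(max=1.0)

def apply_mask(input_ids: torch.Tensor,
               probs: torch.Tensor,
               mask_token_id: int) -> tuple[torch.Tensor, torch.Tensor]:
    """Sample a mask from per-token probabilities, replace masked positions
    with the mask token, and set labels to -100 at unmasked positions."""
    mask = torch.bernoulli(probs).bool()
    labels = torch.where(mask, input_ids, torch.full_like(input_ids, -100))
    masked_ids = torch.where(mask, torch.full_like(input_ids, mask_token_id), input_ids)
    return masked_ids, labels

# Example: five tokens; the last two were hard to predict previously,
# so they are more likely to be masked in the next pass.
input_ids = torch.tensor([101, 2023, 2003, 9999, 8888])
losses = torch.tensor([0.1, 0.2, 0.3, 2.5, 3.0])
ids, labels = apply_mask(input_ids, adaptive_mask_probs(losses), mask_token_id=103)
```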
Anthology ID:
2025.babylm-main.31
Volume:
Proceedings of the First BabyLM Workshop
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Lucas Charpentier, Leshem Choshen, Ryan Cotterell, Mustafa Omer Gul, Michael Y. Hu, Jing Liu, Jaap Jumelet, Tal Linzen, Aaron Mueller, Candace Ross, Raj Sanjay Shah, Alex Warstadt, Ethan Gotlieb Wilcox, Adina Williams
Venue:
BabyLM
Publisher:
Association for Computational Linguistics
Pages:
445–453
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.babylm-main.31/
Cite (ACL):
Lukas Edman and Alexander Fraser. 2025. Mask and You Shall Receive: Optimizing Masked Language Modeling For Pretraining BabyLMs. In Proceedings of the First BabyLM Workshop, pages 445–453, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Mask and You Shall Receive: Optimizing Masked Language Modeling For Pretraining BabyLMs (Edman & Fraser, BabyLM 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.babylm-main.31.pdf