What is the Best Sequence Length for BabyLM?
Suchir Salhan, Richard Diehl Martinez, Zebulon Goriely, Paula Buttery
Abstract
Transformer language models typically operate with a fixed-length context window, which has grown in step with the scale of pretraining datasets. In the BabyLM Challenge, however, many past submissions have defaulted to much shorter sequence lengths. We examine the impact of sequence length on BabyLM pretraining to answer the simple question: what sequence length should we be using when training BabyLMs? Using 100M words of training data and fixed compute budgets, we compare 125M-parameter Mamba and OPT models, finding that although longer is often better, the optimal length depends on both task and architecture. Shorter sequences are sufficient for grammatical generalization tasks, whereas longer contexts benefit morphological analogical reasoning tasks.
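To make the fixed-compute comparison in the abstract concrete, the sketch below shows one simple way to hold a token budget constant while varying sequence length, chunking a corpus into fixed-length training sequences. It is a minimal illustration, not the authors' training code; the batch size, token budget, and one-token-per-word simplification are assumptions.

```python
# Minimal sketch (not the authors' setup): holding a fixed token budget
# constant while the sequence length varies. All numbers are illustrative.

def steps_for_budget(token_budget: int, batch_size: int, seq_len: int) -> int:
    """Optimizer steps needed to consume `token_budget` tokens."""
    tokens_per_step = batch_size * seq_len
    return token_budget // tokens_per_step


def chunk_into_sequences(token_ids: list[int], seq_len: int) -> list[list[int]]:
    """Split a flat token stream into non-overlapping fixed-length sequences."""
    return [
        token_ids[i : i + seq_len]
        for i in range(0, len(token_ids) - seq_len + 1, seq_len)
    ]


if __name__ == "__main__":
    TOKEN_BUDGET = 100_000_000  # ~100M-word BabyLM budget (assumed 1 token per word)
    BATCH_SIZE = 32             # assumed, not from the paper
    for seq_len in (128, 256, 512, 1024):
        steps = steps_for_budget(TOKEN_BUDGET, BATCH_SIZE, seq_len)
        print(f"seq_len={seq_len:>5}  steps={steps}")
```

Under this simplified accounting, doubling the sequence length halves the number of optimizer steps, so the comparison trades context length against the number of parameter updates.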
- Anthology ID: 2025.babylm-main.10
- Volume: Proceedings of the First BabyLM Workshop
- Month: November
- Year: 2025
- Address: Suzhou, China
- Editors: Lucas Charpentier, Leshem Choshen, Ryan Cotterell, Mustafa Omer Gul, Michael Y. Hu, Jing Liu, Jaap Jumelet, Tal Linzen, Aaron Mueller, Candace Ross, Raj Sanjay Shah, Alex Warstadt, Ethan Gotlieb Wilcox, Adina Williams
- Venue: BabyLM
- Publisher: Association for Computational Linguistics
- Pages: 130–146
- URL: https://preview.aclanthology.org/ingest-emnlp/2025.babylm-main.10/
- Cite (ACL): Suchir Salhan, Richard Diehl Martinez, Zebulon Goriely, and Paula Buttery. 2025. What is the Best Sequence Length for BabyLM?. In Proceedings of the First BabyLM Workshop, pages 130–146, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal): What is the Best Sequence Length for BabyLM? (Salhan et al., BabyLM 2025)
- PDF: https://preview.aclanthology.org/ingest-emnlp/2025.babylm-main.10.pdf