Slamming: Training a Speech Language Model on One GPU in a Day

Gallil Maimon, Avishai Elmakies, Yossi Adi


Abstract
We introduce *Slam*, a recipe for training high-quality Speech Language Models (SLMs) on a single academic GPU in 24 hours. We do so through empirical analysis of model initialisation and architecture, synthetic training data, preference optimisation with synthetic data, and tweaking of all other components. We empirically demonstrate that this training recipe also scales well with more compute, achieving results on par with leading SLMs at a fraction of the compute cost. We hope these insights will make SLM training and research more accessible. In the context of SLM scaling laws, our results far outperform the predicted compute-optimal performance, giving an optimistic view of SLM feasibility. Code, data, models, and samples are available at https://pages.cs.huji.ac.il/adiyoss-lab/slamming .
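
To make the recipe concrete, below is a minimal sketch of the initialisation step the abstract alludes to: starting the SLM from a pretrained text LM and swapping its vocabulary for discrete speech units. This is not the authors' released code; the base checkpoint name and the speech-unit vocabulary size are assumptions for illustration only, and it assumes the Hugging Face transformers library.

# Minimal sketch (illustrative, not the authors' code): initialise a speech LM
# from a pretrained text LM, then train on discrete speech units.
from transformers import AutoModelForCausalLM

N_SPEECH_UNITS = 500  # assumed size of the discrete speech-unit vocabulary

# Load a small pretrained text LM; the checkpoint name here is an assumption.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")

# Replace the text vocabulary with speech-unit embeddings; the transformer
# body keeps its pretrained weights.
model.resize_token_embeddings(N_SPEECH_UNITS)

# Training then proceeds with the standard next-token objective over
# speech-unit sequences, optionally followed by preference optimisation
# (e.g. DPO on synthetic preference data).
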
Anthology ID: 2025.findings-acl.631
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 12201–12216
URL: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.631/
Cite (ACL): Gallil Maimon, Avishai Elmakies, and Yossi Adi. 2025. Slamming: Training a Speech Language Model on One GPU in a Day. In Findings of the Association for Computational Linguistics: ACL 2025, pages 12201–12216, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Slamming: Training a Speech Language Model on One GPU in a Day (Maimon et al., Findings 2025)
PDF: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.631.pdf