The Warmup Dilemma: How Learning Rate Strategies Impact Speech-to-Text Model Convergence
Marco Gaido, Sara Papi, Luisa Bentivogli, Alessio Brutti, Mauro Cettolo, Roberto Gretter, Marco Matassoni, Mohamed Nabih, Matteo Negri
Abstract
Training large-scale models presents challenges not only in terms of resource requirements but also in terms of convergence. For this reason, the learning rate (LR) is often decreased as model size increases. Such a simple solution, however, is not enough for speech-to-text (S2T) training, where more complex evolutions of the Transformer architecture (e.g., Conformer or Branchformer) are used in light of their better performance. As a workaround, OWSM designed a double linear warmup of the LR, increasing it to a very small value in the first phase before raising it to a higher value in the second phase. While this solution worked well in practice, it was neither compared with alternative solutions, nor was the impact of different LR warmup schedules on final performance studied. This paper fills this gap, revealing that i) large-scale S2T training demands a sub-exponential LR warmup, and ii) a higher LR in the warmup phase accelerates initial convergence but does not boost final performance.
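To make the two-phase ("double linear") warmup described in the abstract concrete, here is a minimal sketch using PyTorch's LambdaLR. The phase lengths, the intermediate LR fraction, and the peak LR are illustrative placeholders, not the values used by OWSM or in the paper.

```python
# Sketch of a double linear LR warmup: ramp linearly to a small fraction of
# the peak LR (phase 1), then linearly up to the peak LR (phase 2).
# All hyperparameters below are assumptions for illustration only.
import torch
from torch.optim.lr_scheduler import LambdaLR

PHASE1_STEPS = 1_000   # length of the first, shallow ramp (assumed)
PHASE2_STEPS = 9_000   # length of the second, steeper ramp (assumed)
LOW_FACTOR = 0.05      # "very small value" reached at the end of phase 1 (assumed)

def double_linear_warmup(step: int) -> float:
    """Multiplicative factor applied to the base (peak) learning rate."""
    if step < PHASE1_STEPS:
        # Phase 1: 0 -> LOW_FACTOR * peak LR, linearly.
        return LOW_FACTOR * step / PHASE1_STEPS
    if step < PHASE1_STEPS + PHASE2_STEPS:
        # Phase 2: LOW_FACTOR * peak LR -> peak LR, linearly.
        progress = (step - PHASE1_STEPS) / PHASE2_STEPS
        return LOW_FACTOR + (1.0 - LOW_FACTOR) * progress
    return 1.0  # after warmup: hold the peak LR (post-warmup decay not modeled)

model = torch.nn.Linear(80, 256)  # stand-in for an S2T encoder layer
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-3)  # peak LR (assumed)
scheduler = LambdaLR(optimizer, lr_lambda=double_linear_warmup)
# In a training loop, call scheduler.step() once per optimizer step.
```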
- Anthology ID: 2025.iwslt-1.4
- Volume: Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025)
- Month: July
- Year: 2025
- Address: Vienna, Austria (in-person and online)
- Editors: Elizabeth Salesky, Marcello Federico, Antonis Anastasopoulos
- Venues: IWSLT | WS
- Publisher: Association for Computational Linguistics
- Pages: 47–55
- URL: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.iwslt-1.4/
- Cite (ACL): Marco Gaido, Sara Papi, Luisa Bentivogli, Alessio Brutti, Mauro Cettolo, Roberto Gretter, Marco Matassoni, Mohamed Nabih, and Matteo Negri. 2025. The Warmup Dilemma: How Learning Rate Strategies Impact Speech-to-Text Model Convergence. In Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025), pages 47–55, Vienna, Austria (in-person and online). Association for Computational Linguistics.
- Cite (Informal): The Warmup Dilemma: How Learning Rate Strategies Impact Speech-to-Text Model Convergence (Gaido et al., IWSLT 2025)
- PDF: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.iwslt-1.4.pdf