Corey D Barrett
2025
MoTE: Mixture of Task Experts for Multi-task Embedding Models
Miguel Romero Calvo | Shuoyang Ding | Corey D Barrett | Georgiana Dinu | George Karypis
Findings of the Association for Computational Linguistics: ACL 2025
Dense embeddings are fundamental to modern machine learning systems, powering Retrieval-Augmented Generation (RAG), information retrieval, and representation learning. While instruction-conditioning has become the dominant approach for embedding specialization, its direct application to low-capacity models imposes fundamental representational constraints that limit the performance gains derived from specialization. In this paper, we analyze these limitations and introduce the Mixture of Task Experts (MoTE) transformer block, which leverages task-specialized parameters trained with Task-Aware Contrastive Learning to enhance the model’s ability to generate specialized embeddings. Empirical results show that MoTE achieves 64% higher performance gains on retrieval datasets (+3.27 → +5.21) and 43% higher performance gains across all datasets (+1.81 → +2.60). Critically, these gains are achieved without altering instructions, training data, inference time, or the number of active parameters.
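To make the architectural idea concrete, below is a minimal, illustrative sketch of a mixture-of-task-experts style feed-forward block: each task id selects its own expert, so specialization adds capacity without increasing the number of parameters active per forward pass. This is not the authors' implementation; the class name, routing scheme, and dimensions are assumptions for illustration only.

```python
# Illustrative sketch only (not the paper's code): a task-routed FFN block.
import torch
import torch.nn as nn


class TaskExpertFFN(nn.Module):
    def __init__(self, d_model: int = 384, d_ff: int = 1536, num_tasks: int = 4):
        super().__init__()
        # One feed-forward expert per task; only one expert is used per input.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_tasks)
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # Hard routing on the task label: same active-parameter count and
        # inference cost as a single FFN, but task-specialized weights.
        return self.norm(x + self.experts[task_id](x))


# Usage: route a batch of hidden states through a hypothetical "retrieval" expert (id 0).
block = TaskExpertFFN()
hidden = torch.randn(8, 128, 384)   # (batch, seq_len, d_model)
out = block(hidden, task_id=0)
print(out.shape)                    # torch.Size([8, 128, 384])
```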
2024
Hop, skip, jump to Convergence: Dynamics of Learning Rate Transitions for Improved Training of Large Language Models
Shreyas Subramanian | Vignesh Ganapathiraman | Corey D Barrett
Findings of the Association for Computational Linguistics: EMNLP 2024
Various types of learning rate (LR) schedulers are used for training or fine-tuning Large Language Models today. In practice, several mid-flight changes to the LR schedule are required, either manually or through careful choices around warmup steps, peak LR, type of decay, and restarts. To study this further, we consider the effect of switching the learning rate at a predetermined time during training, which we refer to as “SkipLR”. We model SGD as a stochastic gradient flow and show that, when starting from the same initial parameters, switching the learning rate causes the loss curves to contract towards each other. We demonstrate this theoretically for some simple cases, and empirically on large language models. Our analysis provides insight into how learning rate schedules affect the training dynamics, and could inform the design of new schedules to accelerate convergence.
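For intuition, here is a minimal sketch of the kind of predetermined learning-rate switch the abstract describes: train with one rate, then jump to another at a fixed step. This is not the paper's experimental setup; the helper name, rates, and switch step are assumptions chosen only to illustrate the schedule shape.

```python
# Illustrative sketch only (not the paper's code): a step-wise LR switch.
import torch


def make_skiplr_scheduler(optimizer, switch_step: int, lr_before: float, lr_after: float):
    # LambdaLR scales the optimizer's base LR; with base LR set to 1.0,
    # the lambda returns the actual learning rate at each step.
    def lr_lambda(step: int) -> float:
        return lr_before if step < switch_step else lr_after
    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)


# Usage on a toy model: SGD starts at 1e-2 and switches to 1e-3 at step 500.
model = torch.nn.Linear(16, 1)
opt = torch.optim.SGD(model.parameters(), lr=1.0)  # base LR scaled by lr_lambda
sched = make_skiplr_scheduler(opt, switch_step=500, lr_before=1e-2, lr_after=1e-3)
for step in range(1000):
    loss = model(torch.randn(32, 16)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step()
```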