ECHO-LLaMA: Efficient Caching for High-Performance LLaMA Training
Maryam Dialameh | Rezaul Karim | Hossein Rajabzadeh | Omar Mohamed Awad | Boxing Chen | Hyock Ju Kwon | Walid Ahmed | Yang Liu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
This paper introduces ECHO-LLaMA, an efficient variant of the LLaMA architecture designed to improve both training speed and inference throughput while preserving learning capacity. ECHO-LLaMA transforms LLaMA models to share KV caches across certain layers, significantly reducing the computational cost of KV projections while maintaining or improving language performance. Experimental results demonstrate that ECHO-LLaMA achieves up to 77% higher tokens-per-second throughput during training, up to 16% higher Model FLOPs Utilization (MFU), and up to 14% lower loss when trained on an equal number of tokens. Furthermore, on the 1.1B model, ECHO-LLaMA delivers approximately 7% higher test-time throughput compared to the baseline. By introducing a computationally efficient adaptation mechanism, ECHO-LLaMA offers a scalable and cost-effective solution for pretraining and finetuning large language models, enabling faster and more resource-efficient training without compromising performance.
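To make the core idea concrete, below is a minimal PyTorch sketch of cross-layer KV sharing: only designated "leader" layers compute key/value projections, and the following layers in the same group reuse them. This is not the authors' implementation; the single-head attention, the `group_size` grouping scheme, and all class and dimension names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of cross-layer shared KV caching.
# Assumptions: single-head attention, fixed `group_size` sharing groups,
# and illustrative module/dimension names.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedKVAttention(nn.Module):
    """Attention block that either computes its own K/V projections or
    reuses the K/V produced by an earlier layer in its sharing group."""

    def __init__(self, d_model: int, owns_kv: bool):
        super().__init__()
        self.owns_kv = owns_kv
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        if owns_kv:  # only "leader" layers carry K/V projection weights
            self.k_proj = nn.Linear(d_model, d_model, bias=False)
            self.v_proj = nn.Linear(d_model, d_model, bias=False)
        self.o_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x, shared_kv=None):
        q = self.q_proj(x)
        if self.owns_kv:
            k, v = self.k_proj(x), self.v_proj(x)
        else:
            k, v = shared_kv  # reuse K/V from the group's leader layer
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(attn), (k, v)


class SharedKVStack(nn.Module):
    """Decoder stack where every `group_size` consecutive layers share one
    K/V computation, cutting per-layer KV cost and KV-cache size."""

    def __init__(self, n_layers: int, d_model: int, group_size: int):
        super().__init__()
        self.layers = nn.ModuleList(
            SharedKVAttention(d_model, owns_kv=(i % group_size == 0))
            for i in range(n_layers)
        )

    def forward(self, x):
        shared_kv = None
        for layer in self.layers:
            x, kv = layer(x, shared_kv)
            if layer.owns_kv:
                shared_kv = kv  # cache K/V for the following group members
        return x


# Usage: 8 layers, K/V computed only in layers 0 and 4 (group_size=4).
model = SharedKVStack(n_layers=8, d_model=64, group_size=4)
out = model(torch.randn(2, 16, 64))  # (batch, seq_len, d_model)
print(out.shape)
```

Because the non-leader layers in this sketch hold no K/V projection weights at all, both the projection FLOPs and the stored KV cache shrink roughly in proportion to the group size, which is the kind of saving the abstract's throughput and MFU gains point to.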