Taming the Titans: A Survey of Efficient LLM Inference Serving
Ranran Zhen | Juntao Li | Yixin Ji | Zhenlin Yang | Tong Liu | Qingrong Xia | Xinyu Duan | Zhefeng Wang | Baoxing Huai | Min Zhang
Proceedings of the 18th International Natural Language Generation Conference, 2025
Large Language Models (LLMs) for Generative AI have achieved remarkable progress, evolving into sophisticated and versatile tools widely adopted across various domains and applications. However, the substantial memory overhead caused by their vast number of parameters, combined with the high computational demands of the attention mechanism, poses significant challenges to achieving low latency and high throughput in LLM inference serving. Recent research has rapidly advanced this field. This paper provides a comprehensive survey of these methods, covering fundamental instance-level approaches, in-depth cluster-level strategies, and emerging scenarios. At the instance level, we review model placement, request scheduling, decoding length prediction, storage management, and the disaggregation paradigm. At the cluster level, we explore GPU cluster deployment, multi-instance load balancing, and cloud service solutions. Additionally, we discuss specific tasks, modules, and auxiliary methods in emerging scenarios. Finally, we outline potential research directions to further advance the field of LLM inference serving.