Taming the Titans: A Survey of Efficient LLM Inference Serving

Ranran Zhen, Juntao Li, Yixin Ji, Zhenlin Yang, Tong Liu, Qingrong Xia, Xinyu Duan, Zhefeng Wang, Baoxing Huai, Min Zhang


Abstract
Large Language Models (LLMs) for Generative AI have achieved remarkable progress, evolving into sophisticated and versatile tools widely adopted across various domains and applications. However, the substantial memory overhead caused by their vast number of parameters, combined with the high computational demands of the attention mechanism, poses significant challenges in achieving low latency and high throughput for LLM inference services. Recent research has driven rapid progress in this field. This paper provides a comprehensive survey of these methods, covering fundamental instance-level approaches, in-depth cluster-level strategies, and emerging scenarios. At the instance level, we review model placement, request scheduling, decoding length prediction, storage management, and the disaggregation paradigm. At the cluster level, we explore GPU cluster deployment, multi-instance load balancing, and cloud service solutions. Additionally, we discuss specific tasks, modules, and auxiliary methods in emerging scenarios. Finally, we outline potential research directions to further advance the field of LLM inference serving.
Anthology ID:
2025.inlg-main.32
Volume:
Proceedings of the 18th International Natural Language Generation Conference
Month:
October
Year:
2025
Address:
Hanoi, Vietnam
Editors:
Lucie Flek, Shashi Narayan, Lê Hồng Phương, Jiahuan Pei
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
522–541
URL:
https://preview.aclanthology.org/ingest-luhme/2025.inlg-main.32/
Cite (ACL):
Ranran Zhen, Juntao Li, Yixin Ji, Zhenlin Yang, Tong Liu, Qingrong Xia, Xinyu Duan, Zhefeng Wang, Baoxing Huai, and Min Zhang. 2025. Taming the Titans: A Survey of Efficient LLM Inference Serving. In Proceedings of the 18th International Natural Language Generation Conference, pages 522–541, Hanoi, Vietnam. Association for Computational Linguistics.
Cite (Informal):
Taming the Titans: A Survey of Efficient LLM Inference Serving (Zhen et al., INLG 2025)
PDF:
https://preview.aclanthology.org/ingest-luhme/2025.inlg-main.32.pdf