Ranran Zhen
2025
Taming the Titans: A Survey of Efficient LLM Inference Serving
Ranran Zhen | Juntao Li | Yixin Ji | Zhenlin Yang | Tong Liu | Qingrong Xia | Xinyu Duan | Zhefeng Wang | Baoxing Huai | Min Zhang
Proceedings of the 18th International Natural Language Generation Conference
Large Language Models (LLMs) for Generative AI have achieved remarkable progress, evolving into sophisticated and versatile tools widely adopted across various domains and applications. However, the substantial memory overhead caused by their vast number of parameters, combined with the high computational demands of the attention mechanism, poses significant challenges in achieving low latency and high throughput for LLM inference services. Recent advancements, driven by groundbreaking research, have significantly accelerated progress in this field. This paper provides a comprehensive survey of these methods, covering fundamental instance-level approaches, in-depth cluster-level strategies, and emerging scenarios. At the instance level, we review model placement, request scheduling, decoding length prediction, storage management, and the disaggregation paradigm. At the cluster level, we explore GPU cluster deployment, multi-instance load balancing, and cloud service solutions. Additionally, we discuss specific tasks, modules, and auxiliary methods in emerging scenarios. Finally, we outline potential research directions to further advance the field of LLM inference serving.
2021
Chinese Opinion Role Labeling with Corpus Translation: A Pivot Study
Ranran Zhen | Rui Wang | Guohong Fu | Chengguo Lv | Meishan Zhang
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Opinion Role Labeling (ORL), which aims to identify the key roles of an opinion, has received increasing interest. Unlike most previous works, which focus on the English language, in this paper we present the first work on Chinese ORL. We construct a Chinese dataset by manually translating and projecting annotations from the standard English MPQA dataset. We then investigate the effectiveness of cross-lingual transfer methods, including model transfer and corpus translation, and exploit multilingual BERT with Contextual Parameter Generator and Adapter methods to examine the potential of unsupervised cross-lingual learning. Our experiments and analyses of both bilingual and multilingual transfer establish a foundation for future research on this task.
2020
Sentence Matching with Syntax- and Semantics-Aware BERT
Tao Liu | Xin Wang | Chengguo Lv | Ranran Zhen | Guohong Fu
Proceedings of the 28th International Conference on Computational Linguistics
Sentence matching aims to identify the semantic relationship between two sentences and plays a key role in many natural language processing tasks. However, previous studies have mainly focused on exploiting either syntactic or semantic information for sentence matching, and none has considered integrating both. In this study, we propose integrating syntax and semantics into BERT for sentence matching. In particular, we use an implicit syntax and semantics integration method that is less sensitive to the structure of the output information; this implicit integration can alleviate the error propagation problem. The experimental results show that our approach achieves state-of-the-art or competitive performance on several sentence matching datasets, demonstrating the benefits of implicitly integrating syntactic and semantic features in sentence matching.