Towards Coarse-to-Fine Evaluation of Inference Efficiency for Large Language Models

Yushuo Chen, Tianyi Tang, Erge Xiang, Linjiang Li, Xin Zhao, Jing Wang, Yunpeng Chai, Ji-Rong Wen


Abstract
In the real world, large language models (LLMs) can serve as assistants that help users accomplish their jobs, and can also support the development of advanced applications. For the wide application of LLMs, inference efficiency is an essential concern; it has been widely studied in existing work, and numerous optimization algorithms and code libraries have been proposed to improve it. Nonetheless, users still find it challenging to compare the effectiveness of all the above methods and to understand the underlying mechanisms. In this work, we propose a coarse-to-fine evaluation method that encompasses both experimental and analytical components. This method can be applied across various models and inference libraries. Specifically, we examine four usage scenarios within two practical applications. We further provide both theoretical and empirical fine-grained analyses of each module in the Transformer architecture. Our approach can serve as a general and invaluable tool for researchers to evaluate various code libraries and improve inference strategies across different LLMs. We open-source the supporting dataset, code, and evaluation scripts at the link: https://github.com/RUCAIBox/Inference-Efficiency-Evaluation.
Anthology ID:
2025.ccl-1.75
Volume:
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
Month:
August
Year:
2025
Address:
Jinan, China
Editors:
Maosong Sun, Peiyong Duan, Zhiyuan Liu, Ruifeng Xu, Weiwei Sun
Venue:
CCL
Publisher:
Chinese Information Processing Society of China
Pages:
985–1002
URL:
https://preview.aclanthology.org/ingest-ccl/2025.ccl-1.75/
Cite (ACL):
Yushuo Chen, Tianyi Tang, Erge Xiang, Linjiang Li, Xin Zhao, Jing Wang, Yunpeng Chai, and Ji-Rong Wen. 2025. Towards Coarse-to-Fine Evaluation of Inference Efficiency for Large Language Models. In Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025), pages 985–1002, Jinan, China. Chinese Information Processing Society of China.
Cite (Informal):
Towards Coarse-to-Fine Evaluation of Inference Efficiency for Large Language Models (Chen et al., CCL 2025)
PDF:
https://preview.aclanthology.org/ingest-ccl/2025.ccl-1.75.pdf