Xianzhi Yu
2025
Faster and Better LLMs via Latency-Aware Test-Time Scaling
Zili Wang | Tianyu Zhang | Haoli Bai | Lu Hou | Xianzhi Yu | Wulong Liu | Shiming Xiang | Lei Zhu
Findings of the Association for Computational Linguistics: EMNLP 2025
Test-Time Scaling (TTS) has proven effective in improving the performance of Large Language Models (LLMs) during inference. However, existing research has overlooked the efficiency of TTS from a latency-sensitive perspective. Through a latency-aware evaluation of representative TTS methods, we demonstrate that a compute-optimal TTS does not always result in the lowest latency in scenarios where latency is critical. To address this gap and achieve latency-optimal TTS, we propose two key approaches that optimize concurrency configurations: (1) branch-wise parallelism, which leverages multiple concurrent inference branches, and (2) sequence-wise parallelism, enabled by speculative decoding. By integrating these two approaches and allocating computational resources properly to each, our latency-optimal TTS enables a 32B model to reach 82.3% accuracy on MATH-500 within 1 minute and a smaller 3B model to achieve 72.4% accuracy within 10 seconds. Our work emphasizes the importance of latency-aware TTS and demonstrates its ability to deliver both speed and accuracy in latency-sensitive scenarios.
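As a rough illustration of the branch-wise parallelism the abstract describes, the sketch below launches several sampling branches concurrently and aggregates their answers by majority vote. The `generate_answer` function, its parameters, and the voting scheme are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of branch-wise parallel test-time scaling:
# run several sampling branches concurrently, then majority-vote the answers.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor


def generate_answer(question: str, temperature: float = 0.8) -> str:
    """Hypothetical single-branch LLM call returning a final answer string."""
    raise NotImplementedError("plug in an inference backend here")


def branch_parallel_tts(question: str, num_branches: int = 8) -> str:
    """Run num_branches sampled solutions concurrently to hide per-branch latency."""
    with ThreadPoolExecutor(max_workers=num_branches) as pool:
        answers = list(pool.map(lambda _: generate_answer(question),
                                range(num_branches)))
    # Self-consistency-style aggregation: the most frequent answer wins.
    return Counter(answers).most_common(1)[0][0]
```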
2022
HW-TSC’s Submission for the WMT22 Efficiency Task
Hengchao Shang | Ting Hu | Daimeng Wei | Zongyao Li | Xianzhi Yu | Jianfei Feng | Ting Zhu | Lizhi Lei | Shimin Tao | Hao Yang | Ying Qin | Jinlong Yang | Zhiqiang Rao | Zhengzhe Yu
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper presents the submission of Huawei Translation Services Center (HW-TSC) to the WMT 2022 Efficiency Shared Task. For this year’s task, we again apply a sentence-level distillation strategy to train small models with different configurations. We then integrate the average attention mechanism into the lightweight RNN model for more efficient decoding. We also add a retraining step to our 8-bit and 4-bit models to balance model size and translation quality. As before, we use Huawei Noah’s Bolt for INT8 inference and 4-bit storage. Coupled with Bolt’s support for batch inference and multi-core parallel computing, we submit models with different configurations to the CPU latency and throughput tracks to explore the Pareto frontiers.
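As a minimal sketch of the kind of 8-bit conversion the submission describes (prior to its retraining step), the example below applies symmetric per-tensor INT8 weight quantization in NumPy. It is an assumed, generic illustration and does not reflect Bolt's API or the HW-TSC pipeline.

```python
# Illustrative symmetric per-tensor INT8 weight quantization (not Bolt's API).
import numpy as np


def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights to int8 with a single symmetric scale."""
    scale = max(float(np.max(np.abs(weights))), 1e-8) / 127.0  # avoid zero scale
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale


def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights, e.g. to check quality loss."""
    return q.astype(np.float32) * scale
```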