Scaling LLM Inference Efficiently with Optimized Sample Compute Allocation
Kexun Zhang, Shang Zhou, Danqing Wang, William Yang Wang, Lei Li
Abstract
Sampling is a basic operation for large language models (LLMs). In reinforcement learning rollouts and meta-generation algorithms such as Best-of-N, it is essential to sample correct trajectories within a given compute budget. To find an optimal allocation for sample compute budgets, several choices need to be made: Which sampling configurations (model, temperature, language, etc.) to use? How many samples to generate in each configuration? We formulate these choices as a learning problem and propose OSCA, an algorithm that Optimizes Sample Compute Allocation by finding an optimal mix of different inference configurations. Our experiments show that with our learned mixed allocation, we can achieve accuracy better than the best single configuration with 128x less compute on code generation and 25x less compute on 4 reasoning tasks. OSCA is also shown to be effective in agentic workflows beyond single-turn tasks, achieving better accuracy on SWE-Bench with 3x less compute than the default configuration. Our code and generations are released at https://github.com/LeiLiLab/OSCA.
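As a rough illustration of the allocation problem the abstract describes, the Python sketch below greedily spends a compute budget across sampling configurations to maximize the fraction of problems solved by at least one sample. It is a toy under stated assumptions, not the paper's implementation (see the linked repo): `pass_rates`, `costs`, `greedy_allocation`, and the independence assumption across samples are all hypothetical choices made here for illustration.

```python
# Hypothetical sketch of optimizing a sample compute allocation, in the
# spirit of OSCA; not the authors' implementation (see the linked repo).
# Assumes pass_rates[i][j] estimates the chance that one sample from
# configuration i solves training problem j, and costs[i] is the
# per-sample cost of configuration i (both assumptions of this sketch).

import numpy as np

def expected_solve_rate(pass_rates: np.ndarray, alloc: np.ndarray) -> float:
    """Fraction of problems solved by at least one allocated sample.

    P(problem j solved) = 1 - prod_i (1 - p_ij)^(n_i), assuming samples
    are drawn independently.
    """
    log_fail = np.log1p(-np.clip(pass_rates, 0.0, 1.0 - 1e-9))  # (configs, problems)
    total_log_fail = alloc @ log_fail  # log-prob that every sample fails, per problem
    return float(np.mean(1.0 - np.exp(total_log_fail)))

def greedy_allocation(pass_rates: np.ndarray, costs: np.ndarray,
                      budget: float) -> np.ndarray:
    """Spend the budget one sample at a time on the configuration with the
    best marginal gain in solve rate per unit cost."""
    alloc = np.zeros(pass_rates.shape[0])
    spent = 0.0
    while True:
        base = expected_solve_rate(pass_rates, alloc)
        best_i, best_gain = -1, 0.0
        for i in range(len(alloc)):
            if spent + costs[i] > budget:
                continue
            trial = alloc.copy()
            trial[i] += 1
            gain = (expected_solve_rate(pass_rates, trial) - base) / costs[i]
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i < 0:  # budget exhausted or no configuration helps
            return alloc
        alloc[best_i] += 1
        spent += costs[best_i]

# Toy usage: two configurations (e.g., two temperatures) on three problems.
rates = np.array([[0.6, 0.1, 0.0],    # config 0: cheap, never solves problem 3
                  [0.3, 0.3, 0.2]])   # config 1: pricier but broader coverage
alloc = greedy_allocation(rates, costs=np.array([1.0, 2.0]), budget=16.0)
print(alloc, expected_solve_rate(rates, alloc))
```

Note how the greedy rule naturally produces a *mix*: once the cheap configuration saturates the problems it can solve, the marginal gain shifts to the broader configuration, which is the intuition behind a mixed allocation beating the best single configuration.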
- Anthology ID:
- 2025.naacl-long.404
- Volume:
- Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
- Month:
- April
- Year:
- 2025
- Address:
- Albuquerque, New Mexico
- Editors:
- Luis Chiruzzo, Alan Ritter, Lu Wang
- Venue:
- NAACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 7959–7973
- URL:
- https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.404/
- Cite (ACL):
- Kexun Zhang, Shang Zhou, Danqing Wang, William Yang Wang, and Lei Li. 2025. Scaling LLM Inference Efficiently with Optimized Sample Compute Allocation. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 7959–7973, Albuquerque, New Mexico. Association for Computational Linguistics.
- Cite (Informal):
- Scaling LLM Inference Efficiently with Optimized Sample Compute Allocation (Zhang et al., NAACL 2025)
- PDF:
- https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.404.pdf