Balcony: A Lightweight Approach to Dynamic Inference of Generative Language Models
Benyamin Jamialahmadi | Parsa Kavehzadeh | Mehdi Rezagholizadeh | Parsa Farinneya | Hossein Rajabzadeh | Aref Jafari | Boxing Chen | Marzieh S. Tahaei
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Deploying large language models (LLMs) in real-world applications is often hindered by strict computational and latency constraints. While dynamic inference offers the flexibility to adjust model behavior based on varying resource budgets, existing methods are frequently limited by hardware inefficiencies or performance degradation. In this paper, we introduce Balcony, a simple yet highly effective framework for depth-based dynamic inference. By freezing the pretrained LLM and inserting additional transformer layers at selected exit points, Balcony maintains the full model’s performance while enabling real-time adaptation to different computational budgets. These additional layers are trained using a straightforward self-distillation loss that aligns the sub-model outputs with those of the full model. This approach requires significantly fewer training tokens and tunable parameters, drastically reducing computational costs compared to prior methods. When applied to the LLaMA3-8B model, using only 0.2% of the original pretraining data, Balcony achieves minimal performance degradation while enabling significant speedups. Remarkably, we show that Balcony outperforms state-of-the-art methods such as Flextron and LayerSkip on multiple models at various scales, as well as other leading compression techniques, across a variety of benchmarks.
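To make the mechanism concrete, below is a minimal PyTorch sketch of a Balcony-style exit and its self-distillation objective. This is not the authors' code: the names `backbone`, `lm_head`, `balconies`, and `exit_layers` are illustrative assumptions, a stock `nn.TransformerEncoderLayer` stands in for the backbone's own block design, and a HuggingFace-style `output_hidden_states` call is assumed for access to intermediate activations.

```python
import torch
import torch.nn.functional as F
from torch import nn


class BalconyLayer(nn.Module):
    """One trainable transformer block attached at an intermediate exit point.

    Hypothetical stand-in: the real method would reuse the backbone's own
    block design (norms, attention variant, etc.).
    """

    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model,
            nhead=n_heads,
            dim_feedforward=d_ff,
            batch_first=True,
            norm_first=True,
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        seq_len = hidden.size(1)
        # Causal mask so the exit layer stays autoregressive.
        causal_mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=hidden.device),
            diagonal=1,
        )
        return self.block(hidden, src_mask=causal_mask)


def self_distillation_loss(student_logits, teacher_logits, tau: float = 1.0):
    """KL divergence aligning a sub-model's next-token distribution with the
    frozen full model's output (the self-distillation objective)."""
    vocab = student_logits.size(-1)
    s = F.log_softmax(student_logits / tau, dim=-1).view(-1, vocab)
    t = F.softmax(teacher_logits / tau, dim=-1).view(-1, vocab)
    return F.kl_div(s, t, reduction="batchmean") * tau**2


def training_step(backbone, lm_head, balconies, exit_layers, input_ids):
    """One optimization step: only the balcony layers receive gradients;
    the pretrained backbone and the shared LM head stay frozen."""
    with torch.no_grad():
        # Assumes a HuggingFace-style forward exposing per-layer states.
        outputs = backbone(input_ids, output_hidden_states=True)
        hidden_states = outputs.hidden_states
        teacher_logits = lm_head(hidden_states[-1])

    loss = 0.0
    for k, balcony in zip(exit_layers, balconies):
        h = balcony(hidden_states[k])   # trainable exit adapter
        student_logits = lm_head(h)     # shared frozen head
        loss = loss + self_distillation_loss(student_logits, teacher_logits)
    return loss / len(exit_layers)
```

At inference time, a deployment would pick the exit index matching its latency budget, run the frozen backbone up to that layer, apply the corresponding balcony layer, and decode through the shared head, so one set of weights serves every budget.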