Xipeng Shen
2025
A Drop-In Solution for On-the-Fly Adaptation of Speculative Decoding in Large Language Models
Jiesong Liu | Brian Park | Xipeng Shen
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large Language Models (LLMs) are cutting-edge generative AI models built on the transformer architecture, and they tend to be highly memory-intensive during real-time inference. Various strategies have been developed to enhance their end-to-end inference speed, one of which is speculative decoding. This technique runs a smaller LLM (the draft model) to propose tokens over a defined window size, denoted γ, which are then validated by the larger LLM (the target model). Choosing the optimal γ value and the draft model is essential for unlocking the potential of speculative decoding, but doing so is difficult because of the complex influence of various factors, including the nature of the task, the hardware in use, and the combination of the large and small models. This paper introduces *on-the-fly adaptation of speculative decoding*, a solution that dynamically adapts these choices to maximize the efficiency of speculative decoding for LLM inference. As a drop-in solution, it requires no offline benchmarking or training. Experiments show that the solution yields a 3.55-16.48% speed improvement over standard speculative decoding and a 1.2-3.4× speedup over the default LLMs.
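For context, the draft-then-validate mechanism the abstract describes can be sketched as below. This follows the standard speculative sampling scheme of Leviathan et al. (2023); the function names (`draft_probs_fn`, `target_probs_fn`) are hypothetical stand-ins for real model calls, and the sketch omits the paper's contribution, the on-the-fly adaptation of γ and the draft model.

```python
import numpy as np

def speculative_step(target_probs_fn, draft_probs_fn, prefix, gamma, rng=None):
    """One round of standard speculative decoding (a sketch, not this paper's code).

    `draft_probs_fn(seq)` is assumed to return one next-token distribution;
    `target_probs_fn(prefix, drafted)` is assumed to return gamma+1
    distributions from a single forward pass over the drafted tokens.
    """
    rng = rng or np.random.default_rng()
    seq = list(prefix)
    drafted, q = [], []
    for _ in range(gamma):                            # draft model proposes gamma tokens
        dist = draft_probs_fn(seq)
        tok = int(rng.choice(len(dist), p=dist))
        drafted.append(tok)
        q.append(np.asarray(dist))
        seq.append(tok)

    p = target_probs_fn(prefix, drafted)              # target validates all drafts at once

    accepted = []
    for i, tok in enumerate(drafted):                 # accept token with prob min(1, p/q)
        if rng.random() < min(1.0, p[i][tok] / q[i][tok]):
            accepted.append(tok)
        else:                                         # reject: resample from the residual
            residual = np.maximum(np.asarray(p[i]) - q[i], 0.0)
            residual /= residual.sum()
            accepted.append(int(rng.choice(len(residual), p=residual)))
            return accepted                           # discard the remaining draft tokens
    accepted.append(int(rng.choice(len(p[gamma]), p=p[gamma])))  # bonus token
    return accepted
```

The choice of γ governs the trade-off this loop exposes: a larger window amortizes more target-model passes when draft tokens are accepted, but wastes more draft work on a rejection, which is why the optimal value shifts with task, hardware, and model pairing.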
2023
Co-evolving data-driven and NLU-driven Synthesizers for Generating Code in Domain Growth and Data Scarcity
Jiasheng Gu | Zifan Nan | Zhiyuan Peng | Xipeng Shen | Dongkuan Xu
Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning
Natural language programming automatically generates code from a user's text query. Recent solutions are either data-driven or natural language understanding (NLU)-driven. However, data-driven synthesizers require a large number of query-code pairs for training, which hinders their application to low-resource programming languages with growing domains, whose functionality and grammar may be actively updated. NLU-driven synthesizers avoid this problem, but their code generation is slow and their performance quickly saturates as data accumulates. In this paper, we propose a circular training framework, Colead, which co-evolves the data-driven synthesizer and the NLU-driven synthesizer to achieve high-quality code generation under data scarcity and domain growth. The NLU-driven synthesizer generates query-code pairs to update the data-driven synthesizer, which in turn shares part of its updated model to improve the NLU-driven synthesizer, enabling the co-evolution of both. Experiments show that Colead outperforms the baselines under domain growth and data scarcity, and that it consistently improves both the data-driven and the NLU-driven synthesizer throughout the co-evolution.
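A minimal sketch of the circular training loop described above follows. Every name in it (`NLUSynthesizer`, `DataSynthesizer`, `co_evolve`, `absorb`, `shared_component`) is a hypothetical placeholder chosen for illustration, not Colead's actual interface.

```python
from typing import Protocol

class NLUSynthesizer(Protocol):
    def generate(self, query: str) -> str: ...           # query -> code
    def absorb(self, shared_part) -> None: ...           # take in shared parameters

class DataSynthesizer(Protocol):
    def train(self, pairs: list[tuple[str, str]]) -> None: ...
    def shared_component(self): ...                      # expose part of the updated model

def co_evolve(nlu: NLUSynthesizer, data: DataSynthesizer,
              queries: list[str], rounds: int) -> None:
    """Each round closes the circle: the NLU side labels queries with code,
    the data-driven side trains on those pairs, and part of its updated
    model flows back to improve the NLU side."""
    for _ in range(rounds):
        pairs = [(q, nlu.generate(q)) for q in queries]  # synthetic query-code pairs
        data.train(pairs)                                # update the data-driven synthesizer
        nlu.absorb(data.shared_component())              # co-evolve the NLU-driven synthesizer
```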