Efficient Beam Search for Large Language Models Using Trie-Based Decoding
Brian J Chan | Mao-xun Huang | Jui-Hung Cheng | Chao-Ting Chen | Hen-Hsen Huang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
This work presents a novel trie-based (prefix tree) parallel decoding method that addresses the memory inefficiency of batch-based beam search. By sharing a single KV cache across beams with common prefixes, our approach dramatically reduces memory usage and enables efficient decoding. We evaluated our method across three attention architectures: Multi-Head Attention (Phi-3.5-mini-instruct), Grouped Query Attention (Llama-3.1-8B-Instruct), and Sliding Window Attention (Mistral-Small-24B-Instruct-2501), using CNN/DailyMail for abstractive summarization and HumanEval for code generation. Our experiments demonstrate substantial memory savings (4–8×) and up to 2.4× faster decoding without compromising generation quality. These results highlight our method's suitability for memory-constrained environments and large-scale deployments.
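To make the prefix-sharing idea in the abstract concrete, the following minimal sketch shows a trie whose nodes each map one token to a single slot in a shared KV cache, so beams walking a common prefix reuse the same cache entries instead of duplicating them per beam. This is an illustration of the general technique, not the authors' implementation; the names `BeamTrie`, `TrieNode`, and `kv_slot` are hypothetical, and an integer counter stands in for rows of a real KV cache tensor.

```python
from dataclasses import dataclass, field


@dataclass
class TrieNode:
    token: int
    kv_slot: int                                   # index of this token's K/V entry in the shared cache
    children: dict = field(default_factory=dict)   # token id -> TrieNode


class BeamTrie:
    def __init__(self):
        self.root = TrieNode(token=-1, kv_slot=-1)
        self.next_slot = 0                         # stand-in for allocated KV cache rows

    def extend(self, node: TrieNode, token: int) -> TrieNode:
        """Extend a beam by one token, reusing the existing node (and its
        KV slot) if another beam already took this branch."""
        if token not in node.children:
            node.children[token] = TrieNode(token, self.next_slot)
            self.next_slot += 1                    # only novel (prefix, token) pairs allocate cache
        return node.children[token]


# Two beams sharing a 3-token prefix: 5 cache slots instead of 2 * 4 = 8.
trie = BeamTrie()
a = b = trie.root
for t in (11, 42, 7):                              # common prefix, cached once
    a = trie.extend(a, t)
    b = trie.extend(b, t)                          # reuses a's nodes, no new slots
a = trie.extend(a, 99)                             # beams diverge here
b = trie.extend(b, 100)
print(trie.next_slot)                              # -> 5
```

Under this view, batch-based beam search corresponds to storing every beam's full prefix separately, while the trie stores each shared prefix exactly once, which is where the reported memory savings come from.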