Fanjin Zhang
2025
SAM Decoding: Speculative Decoding via Suffix Automaton
Yuxuan Hu | Ke Wang | Xiaokang Zhang | Fanjin Zhang | Cuiping Li | Hong Chen | Jing Zhang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Speculative decoding (SD) has been demonstrated as an effective technique for lossless LLM inference acceleration. Retrieval-based SD methods, one kind of model-free method, have yielded promising speedups, but they often rely on a single retrieval source, use inefficient retrieval methods, and are constrained to certain tasks. This paper presents a novel retrieval-based speculative decoding method that adapts the suffix automaton (SAM) for efficient and accurate draft generation by utilizing both the text sequence being generated and a static text corpus. Unlike existing n-gram matching methods, SAM-Decoding finds the exact longest suffix match, achieving an average time complexity of O(1) per generation step for both SAM update and suffix retrieval. It can also be integrated with existing methods, adaptively selecting a draft generation strategy based on match length to generalize to broader domains. Extensive experiments on Spec-Bench show that our method is 18% faster than other retrieval-based SD methods. Additionally, when combined with the advanced EAGLE-2, it provides an additional speedup of 3.28%–11.13% across various-sized LLM backbones.
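To illustrate the data structure behind this approach, the sketch below builds a suffix automaton over integer token IDs with amortized O(1) online extension and walks a generated sequence through it to maintain the exact longest suffix match. This is a minimal, hypothetical illustration of the general technique, not the paper's implementation; the names `SuffixAutomaton`, `step_match`, and the stored `endpos` field are assumptions made for this example.

```python
class SuffixAutomaton:
    """Minimal online suffix automaton over a token sequence (illustrative sketch)."""

    def __init__(self):
        self.len = [0]        # longest string length in each state's class
        self.link = [-1]      # suffix links
        self.next = [{}]      # transitions: token -> state
        self.endpos = [-1]    # one end position per state, to read off a continuation
        self.last = 0         # state representing the whole sequence so far
        self.n = 0            # number of tokens inserted

    def _new_state(self, length, endpos):
        self.len.append(length)
        self.link.append(-1)
        self.next.append({})
        self.endpos.append(endpos)
        return len(self.len) - 1

    def extend(self, token):
        """Amortized O(1) extension with one more token (classic SAM construction)."""
        cur = self._new_state(self.len[self.last] + 1, self.n)
        p = self.last
        while p != -1 and token not in self.next[p]:
            self.next[p][token] = cur
            p = self.link[p]
        if p == -1:
            self.link[cur] = 0
        else:
            q = self.next[p][token]
            if self.len[p] + 1 == self.len[q]:
                self.link[cur] = q
            else:
                clone = self._new_state(self.len[p] + 1, self.endpos[q])
                self.next[clone] = dict(self.next[q])
                self.link[clone] = self.link[q]
                while p != -1 and self.next[p].get(token) == q:
                    self.next[p][token] = clone
                    p = self.link[p]
                self.link[q] = clone
                self.link[cur] = clone
        self.last = cur
        self.n += 1


def step_match(sam, state, length, token):
    """Advance the longest-suffix match by one token, following suffix links
    to shorten the match when no transition on `token` exists."""
    while state != 0 and token not in sam.next[state]:
        state = sam.link[state]
        length = sam.len[state]
    if token in sam.next[state]:
        state = sam.next[state][token]
        length += 1
    else:
        length = 0
    return state, length


if __name__ == "__main__":
    corpus = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]   # toy token IDs standing in for a text corpus
    sam = SuffixAutomaton()
    for t in corpus:
        sam.extend(t)

    state, length = 0, 0
    for t in [9, 2, 6, 5]:                        # tokens generated so far
        state, length = step_match(sam, state, length, t)
    end = sam.endpos[state]                       # where the matched suffix ends in `corpus`
    draft = corpus[end + 1 : end + 1 + 4]         # copy a few continuation tokens as the draft
    print(length, draft)                          # -> 4 [3, 5]
```

In a SAM-Decoding-style setup, one such automaton could index the static corpus while another is extended online with the generated sequence; the end position of the matched state indicates where a draft continuation can be copied from for verification by the target LLM.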