Speculative Sampling via Exponential Races

Szymon Kobus, Deniz Gunduz


Abstract
Speculative decoding accelerates large language model inference using a smaller draft model. In this paper, we establish a surprising connection between speculative sampling and the concept of channel simulation from information theory, which aims to simulate a noisy channel using as few bits as possible. This connection allows us to provide an information-theoretic analysis of the speed-up that can be achieved by speculative sampling. Leveraging this link, we derive an explicit relation between the generation speed-up and the number of tokens k generated by the draft model, valid asymptotically for large k and serving as an upper bound for all k. We also propose ERSS, a novel speculative sampling method based on exponential races, which matches state-of-the-art performance.
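For context, the baseline the abstract refers to is the standard accept/reject step of speculative sampling (not this paper's contribution): a draft token is accepted with probability min(1, p(x)/q(x)) and otherwise replaced by a sample from the normalized residual max(0, p - q), which yields a token distributed exactly according to the target distribution p. A minimal sketch, assuming numpy and explicit vocabulary-sized distributions; the function name is illustrative:

```python
import numpy as np

def speculative_accept_step(p, q, rng):
    """One accept/reject step of standard speculative sampling.

    p: target-model distribution over the vocabulary.
    q: draft-model distribution over the same vocabulary.
    Returns a token whose law is exactly p.
    """
    x = rng.choice(len(q), p=q)                # draft proposes x ~ q
    if rng.random() < min(1.0, p[x] / q[x]):
        return x                               # accept the draft token
    residual = np.maximum(p - q, 0.0)          # on rejection, resample from
    return rng.choice(len(p), p=residual / residual.sum())  # max(0, p - q), normalized
```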
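The "exponential race" in the title refers to a standard sampling construction: if E_i are i.i.d. Exp(1), then the arrival times T_i = E_i / p_i satisfy P(argmin_i T_i = i) = p_i, so the winner of the race is distributed according to p. A minimal sketch of that primitive, assuming numpy (the ERSS coupling of draft and target races is detailed in the paper itself, not here):

```python
import numpy as np

def exponential_race_sample(p, rng):
    """Sample an index from distribution p via an exponential race:
    with E_i ~ Exp(1), the index minimizing E_i / p_i has law p."""
    p = np.asarray(p, dtype=float)
    e = rng.exponential(size=len(p))
    t = np.full(len(p), np.inf)
    t[p > 0] = e[p > 0] / p[p > 0]   # zero-probability tokens never arrive
    return int(np.argmin(t))
```

Reusing the same exponential variables across the draft and target distributions couples their samples through common randomness, which is, loosely speaking, the channel-simulation idea the abstract connects to speculative sampling.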
Anthology ID: 2025.findings-acl.936
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 18189–18204
URL: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.936/
Cite (ACL): Szymon Kobus and Deniz Gunduz. 2025. Speculative Sampling via Exponential Races. In Findings of the Association for Computational Linguistics: ACL 2025, pages 18189–18204, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Speculative Sampling via Exponential Races (Kobus & Gunduz, Findings 2025)
PDF: https://preview.aclanthology.org/display_plenaries/2025.findings-acl.936.pdf