Benchmarking the Energy Savings with Speculative Decoding Strategies

Rohit Dutta, Paramita Koley, Soham Poddar, Janardan Misra, Sanjay Podder, Naveen Balani, Saptarshi Ghosh, Niloy Ganguly


Abstract
Speculative decoding has emerged as an effective method for reducing the latency and cost of LLM inference. However, the energy requirements of these techniques have received inadequate attention. To address this gap, this paper presents a comprehensive survey of the energy requirements of speculative decoding strategies, with a detailed analysis of how various factors, namely model size and family, the speculative decoding strategy, and dataset characteristics, influence the energy savings.
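
For readers unfamiliar with the technique being benchmarked, the sketch below illustrates one round of draft-then-verify speculative decoding. It is a minimal, hypothetical Python example, not code from the paper: draft_probs and target_probs are toy stand-ins for the small draft model and the large target model, and the accept/reject rule follows the standard speculative sampling scheme (accept a drafted token with probability min(1, p_target/p_draft); on the first rejection, resample from the normalized residual and stop).

# Hypothetical sketch of one speculative-decoding round (not the paper's code).
# draft_probs / target_probs are toy stand-ins for real model forward passes.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50  # toy vocabulary size


def draft_probs(context):
    """Stand-in for the small draft model's next-token distribution (ignores context)."""
    logits = rng.normal(size=VOCAB)
    e = np.exp(logits - logits.max())
    return e / e.sum()


def target_probs(context):
    """Stand-in for the large target model's next-token distribution (ignores context)."""
    logits = rng.normal(size=VOCAB)
    e = np.exp(logits - logits.max())
    return e / e.sum()


def speculative_step(context, k=4):
    """One round: draft k tokens, verify them with the target, return accepted tokens."""
    # 1. The draft model proposes k tokens autoregressively.
    drafted, q = [], []
    ctx = list(context)
    for _ in range(k):
        p = draft_probs(ctx)
        tok = rng.choice(VOCAB, p=p)
        drafted.append(tok)
        q.append(p)
        ctx.append(tok)

    # 2. The target model scores every drafted position (one batched pass in practice).
    p_list = [target_probs(list(context) + drafted[:i]) for i in range(k)]

    # 3. Accept/reject each drafted token; resample from the residual on rejection.
    accepted = []
    for i, tok in enumerate(drafted):
        p, qd = p_list[i], q[i]
        if rng.random() < min(1.0, p[tok] / qd[tok]):
            accepted.append(tok)
        else:
            residual = np.maximum(p - qd, 0.0)
            residual /= residual.sum()
            accepted.append(rng.choice(VOCAB, p=residual))
            return accepted  # stop at the first rejection

    # 4. All drafts accepted: take one bonus token from the target model.
    accepted.append(rng.choice(VOCAB, p=target_probs(list(context) + drafted)))
    return accepted


print(speculative_step([1, 2, 3]))

In a benchmark like the one described above, the fraction of drafted tokens that survive verification governs how many expensive target-model passes are avoided, which is presumably what drives both the latency and the energy savings being measured.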
Anthology ID:
2026.findings-eacl.249
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4737–4748
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.249/
Cite (ACL):
Rohit Dutta, Paramita Koley, Soham Poddar, Janardan Misra, Sanjay Podder, Naveen Balani, Saptarshi Ghosh, and Niloy Ganguly. 2026. Benchmarking the Energy Savings with Speculative Decoding Strategies. In Findings of the Association for Computational Linguistics: EACL 2026, pages 4737–4748, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Benchmarking the Energy Savings with Speculative Decoding Strategies (Dutta et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.249.pdf
Checklist:
2026.findings-eacl.249.checklist.pdf