PLD+: Accelerating LLM Inference by Leveraging Language Model Artifacts

Shwetha Somasundaram, Anirudh Phukan, Apoorv Saxena


Abstract
To reduce the latency associated with autoregressive LLM inference, speculative decoding has emerged as a novel decoding paradigm, where future tokens are drafted and verified in parallel. However, the practical deployment of speculative decoding is hindered by its requirement for additional computational resources and fine-tuning, which limits its out-of-the-box usability. To address these challenges, we present PLD+, a suite of novel algorithms developed to accelerate the inference process of LLMs, particularly for input-guided tasks. These tasks, which include code editing, text editing, summarization, etc., often feature outputs with substantial overlap with their inputs—an attribute PLD+ is designed to exploit. PLD+ also leverages the artifacts (attention and hidden states) generated during inference to accelerate decoding. We test our approach on five input-guided tasks, and through extensive experiments we find that PLD+ outperforms all tuning-free approaches. In the greedy setting, it even outperforms the state-of-the-art tuning-dependent approach EAGLE on four of the tasks (by a margin of up to 2.31 in terms of average speedup). Our approach is tuning free, does not require any additional compute, and can easily be used to accelerate inference of any LLM.
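The input-overlap idea builds on prompt lookup decoding (PLD), which drafts future tokens by matching the most recent n-gram of the generated text against the prompt and copying the tokens that followed it; the verifier then accepts or rejects the draft in a single parallel forward pass. The following sketch illustrates only this baseline drafting step (PLD+ itself additionally uses attention and hidden states to rank candidate spans, which is not shown); the function name and parameters are illustrative, not from the paper.

```python
def find_draft_tokens(tokens, ngram_size=3, num_draft=5):
    """Illustrative prompt-lookup drafting step.

    Search the context for an earlier occurrence of the last `ngram_size`
    tokens and return up to `num_draft` tokens that followed that match,
    to be verified in parallel by the target model.
    """
    if len(tokens) < ngram_size:
        return []
    tail = tokens[-ngram_size:]
    # Scan right-to-left so the most recent match wins; exclude the tail itself.
    for start in range(len(tokens) - ngram_size - 1, -1, -1):
        if tokens[start:start + ngram_size] == tail:
            follow = tokens[start + ngram_size:start + ngram_size + num_draft]
            if follow:
                return follow
    return []  # no overlap found: fall back to ordinary autoregressive decoding
```

For input-guided tasks such as code or text editing, the output frequently repeats long spans of the input, so this lookup often yields multi-token drafts that are accepted wholesale, which is the source of the speedup.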
Anthology ID:
2025.findings-naacl.338
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6075–6089
URL:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.338/
Cite (ACL):
Shwetha Somasundaram, Anirudh Phukan, and Apoorv Saxena. 2025. PLD+: Accelerating LLM Inference by Leveraging Language Model Artifacts. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 6075–6089, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
PLD+: Accelerating LLM Inference by Leveraging Language Model Artifacts (Somasundaram et al., Findings 2025)
PDF:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.338.pdf