@inproceedings{le-etal-2025-spectra,
    title = "{SPECTRA}: Faster Large Language Model Inference with Optimized Internal and External Speculation",
    author = "Le, Nguyen-Khang  and
      Do, Truong Dinh  and
      Nguyen, Le-Minh",
    editor = "Che, Wanxiang  and
      Nabende, Joyce  and
      Shutova, Ekaterina  and
      Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.acl-long.685/",
    doi = "10.18653/v1/2025.acl-long.685",
    pages = "14015--14034",
    ISBN = "979-8-89176-251-0",
    abstract = "Inference with modern Large Language Models (LLMs) is both computationally expensive and time-consuming. Speculative decoding has emerged as a promising solution, but existing approaches face key limitations: training-based methods require a draft model that is challenging to obtain and lacks generalizability, while training-free methods offer limited speedup gains. In this work, we present Spectra, a novel framework for accelerating LLM inference without the need for additional training or modification to the original LLM. Spectra introduces two new techniques for efficiently utilizing internal and external speculation, each outperforming corresponding state-of-the-art (SOTA) methods independently. When combined, these techniques achieve up to a 4.08x speedup across various benchmarks and LLM architectures, significantly surpassing existing training-free approaches. The implementation of Spectra is publicly available."
}

Markdown (Informal)
[SPECTRA: Faster Large Language Model Inference with Optimized Internal and External Speculation](https://aclanthology.org/2025.acl-long.685/) (Le et al., ACL 2025)