@inproceedings{ou-etal-2024-lossless,
    title = "Lossless Acceleration of Large Language Model via Adaptive N-gram Parallel Decoding",
    author = "Ou, Jie  and
      Chen, Yueming  and
      Tian, Wenhong",
    editor = "Yang, Yi  and
      Davani, Aida  and
      Sil, Avi  and
      Kumar, Anoop",
    booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2024.naacl-industry.2/",
    doi = "10.18653/v1/2024.naacl-industry.2",
    pages = "10--22",
    abstract = "While Large Language Models (LLMs) have shown remarkable abilities, they are hindered by significant resource consumption and considerable latency due to autoregressive processing. In this study, we introduce Adaptive N-gram Parallel Decoding (ANPD), an innovative and lossless approach that accelerates inference by allowing the simultaneous generation of multiple tokens. ANPD incorporates a two-stage approach: it begins with a rapid drafting phase that employs an N-gram module, which adapts based on the current interactive context, followed by a verification phase, during which the original LLM assesses and confirms the proposed tokens. Consequently, ANPD preserves the integrity of the LLM{'}s original output while enhancing processing speed. We further leverage a multi-level architecture for the N-gram module to enhance the precision of the initial draft, consequently reducing inference latency. ANPD eliminates the need for retraining or extra GPU memory, making it an efficient and plug-and-play enhancement. In our experiments, models such as LLaMA and its fine-tuned variants have shown speed improvements up to 3.67x, validating the effectiveness of our proposed ANPD."
}
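
The abstract describes a two-stage draft-and-verify loop: an N-gram module, rebuilt adaptively from the tokens generated so far, proposes several tokens at once, and the LLM keeps only the prefix that matches its own greedy output, so decoding stays lossless. The sketch below illustrates that loop under stated assumptions: `NgramDraft`, `anpd_decode`, and `llm_next_token` are hypothetical names, not the authors' released code, and the verification here calls the model once per drafted token, whereas the real ANPD checks the whole draft in a single batched forward pass (which is where the speedup comes from). The multi-level N-gram architecture is also omitted for brevity.

```python
# Minimal sketch of N-gram draft-and-verify decoding, assuming greedy sampling.
# All names here are illustrative, not the paper's implementation.
from collections import defaultdict


class NgramDraft:
    """Toy N-gram table rebuilt adaptively from the current context."""

    def __init__(self, n=3):
        self.n = n
        self.table = defaultdict(dict)  # (n-1)-gram prefix -> {next_token: count}

    def update(self, tokens):
        # Re-count continuations over the tokens generated so far
        # (the "adapts based on the current context" part of the abstract).
        self.table.clear()
        for i in range(len(tokens) - self.n + 1):
            prefix = tuple(tokens[i:i + self.n - 1])
            nxt = tokens[i + self.n - 1]
            self.table[prefix][nxt] = self.table[prefix].get(nxt, 0) + 1

    def draft(self, tokens, k):
        # Greedily chain the most frequent continuation up to k tokens.
        out, proposed = list(tokens), []
        for _ in range(k):
            cands = self.table.get(tuple(out[-(self.n - 1):]))
            if not cands:
                break
            nxt = max(cands, key=cands.get)
            proposed.append(nxt)
            out.append(nxt)
        return proposed


def anpd_decode(llm_next_token, prompt_tokens, max_new=50, draft_len=4):
    """llm_next_token(tokens) -> the LLM's greedy next token. One call stands in
    for one forward pass; a real implementation verifies the draft in one batch."""
    tokens = list(prompt_tokens)
    drafter = NgramDraft(n=3)
    produced = 0
    while produced < max_new:
        drafter.update(tokens)
        proposal = drafter.draft(tokens, draft_len)
        accepted = 0
        for tok in proposal:  # keep the longest prefix the LLM agrees with
            if llm_next_token(tokens) == tok:
                tokens.append(tok)
                accepted += 1
                produced += 1
            else:
                break
        if accepted < len(proposal) or not proposal:
            tokens.append(llm_next_token(tokens))  # fall back to one LLM step
            produced += 1
    return tokens
```

Because every accepted token equals what greedy decoding would have emitted, the output is identical to the plain LLM's, matching the paper's losslessness claim; only the number of LLM forward passes changes. To try the sketch, wrap any model's greedy argmax step in a `llm_next_token(tokens)` callable and pass it to `anpd_decode`.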